Robotics EC368 Module 3

The document discusses robotic vision systems and kinematics. It describes imaging, sensing, digitization and image processing techniques used in robotic vision. It also explains concepts like position, orientation, rotation, and representation of rigid motion using homogeneous transformation matrices in kinematics.

Module 3

Robotic vision systems: Imaging, sensing and digitization, image
processing techniques, areas of application in robotics.

Introduction to kinematics: Position and orientation of objects,
rotation, Euler angles, rigid motion representation using the
homogeneous transformation matrix.
Robotic vision systems

• A highly sophisticated sensor system in robots.
• These sensors are used for interaction with the environment
(external sensors).
• Uses a camera to capture the scene.
• The captured images are processed using image processing
techniques for further analysis.
• This system is also known as computer vision.

Applications: object detection, face recognition, automatic navigation.
Images
• The images obtained from normal cameras are 2-D images.
• They lack depth information.
• Each point in the image is represented using two axes.
• Images obtained from CT scanners are 3-D (they contain depth
information).
• A digital image is a rectangular array of dots or picture elements
called PIXELS, arranged in the form of a matrix with m rows and
n columns. The expression m×n is called the resolution of the
image.
• Each element (value) of the matrix is the intensity of light coming
from the corresponding point on the object.
• So an image is a collection of data representing the light intensity
of the scene.
Types of Images:

1. Bi-level (or monochromatic) image: an image where each pixel can have one
of two values, normally referred to as black and white. Each pixel in such an
image is represented by one bit, making this the simplest type of image.

2. Grayscale image: a pixel can have one of 2^n shades of gray (or shades
of some other color). The value of n is normally compatible with a byte size;
i.e., it is 4, 8, 12, 16, 24, or some other convenient multiple of 4 or 8.
The set of the most-significant bits of all the pixels is the most-significant
bitplane. Thus, a grayscale image has n bitplanes.

3. Continuous-tone image: an image that can have many similar colours.
A pixel is represented by either a single large number (in the case of many
gray scales) or three components (in the case of a colour image). A
continuous-tone image is normally a natural image, obtained by taking a
photograph with a digital camera or by scanning a photograph or a painting.
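The bitplane idea from the grayscale description above can be sketched in NumPy. The tiny 2×2 image and its values are invented purely for illustration; for an 8-bit image (n = 8), bit 7 of every pixel forms the most-significant bitplane.

```python
import numpy as np

# A hypothetical 2x2 8-bit grayscale image (n = 8: 256 shades, 8 bitplanes).
img = np.array([[200,  50],
                [129, 255]], dtype=np.uint8)

def bitplane(image, k):
    """Extract bitplane k: bit k of every pixel (k = 7 is most significant)."""
    return (image >> k) & 1

msb = bitplane(img, 7)   # most-significant bitplane
lsb = bitplane(img, 0)   # least-significant bitplane
print(msb)   # [[1 0]
             #  [1 1]]
print(lsb)   # [[0 0]
             #  [1 1]]
```

The most-significant bitplane already looks like a rough bi-level version of the image, which is why it carries most of the visual information.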
Stages of Robotic Vision

1. Sensing:
• The process of capturing or yielding an image.
• Images can be captured using Analog Cameras and Digital
cameras.
• But for computer vision, images should be digitized.
• These digitized images are stored in the computer memory in the
binary form (1s and 0s).

2. Preprocessing:
• Preprocessing is done to convert the image into a proper form to
be processed by a processor.
• It includes noise removal, color conversion, enhancement, etc.
3. Segmentation:
• Process of partitioning image into objects of interest.

4. Description/Feature extraction
• The required features are extracted to differentiate between the
different objects in an image.

5. Recognition
• Identifying different objects in the images.
• Used for classification purposes, detection of objects etc.

6. Interpretation
• After doing all the processing of the objects, assign meaning to
the recognized objects.
Image:
• An image refers to a two-dimensional light intensity function
denoted by f(x,y), where x and y denote the spatial coordinates.

• The magnitude of f(·) at spatial coordinates (x,y) gives the
light intensity of the image at that point.

• Colour image – 3 samples per pixel
• Intensity (gray-level) image – 1 sample per pixel
• Binary image – 1 sample per pixel
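The three image types above differ only in how many samples each pixel carries, which shows up directly in the shape and dtype of the array that stores them. The 4×6 size here is an arbitrary choice for illustration:

```python
import numpy as np

# Array shapes for the three image types (4x6 is an arbitrary example size):
gray   = np.zeros((4, 6), dtype=np.uint8)      # 1 sample per pixel, 0..255
binary = np.zeros((4, 6), dtype=bool)          # 1 sample per pixel, 0 or 1
color  = np.zeros((4, 6, 3), dtype=np.uint8)   # 3 samples per pixel (R, G, B)

# f(x, y): the intensity at spatial coordinates (column x, row y).
y, x = 2, 5
print(gray[y, x])    # a single gray level
print(color[y, x])   # an (R, G, B) triple
```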
Image Acquisition
• Images are generated by the combination of an illumination
source and the reflection or absorption of energy from that source
by the elements of the scene being imaged.
e.g. a visible light source illuminates a common everyday 3-D scene.
• To get a 2-D picture of the scene we can use a camera which
contains light sensors.
• The reflected light from the scene being imaged is focused by
the camera lens.
• Light falls on the sensing material (a CCD array) and produces
electrical signals proportional to the light intensity.
• An A/D converter converts this signal to a set of discrete
numbers (gray-level quantization).
• The illumination may originate from a source of
electromagnetic energy such as radar, infrared or X-rays.
• Scene elements can be familiar objects, or may be a molecule,
buried rock formations, the human brain, etc.
• Depending on the nature of the source, illumination energy is
reflected from or transmitted through objects.
Obtaining a digital image:

Continuous image → Sampling and quantization → Digital image

• Digitization of the spatial coordinates (x,y) is called image
sampling.
• The amplitude digitization is called intensity/gray-level
quantization.
• A digital image is a representation of a two-dimensional image as
a finite set of digital values, called picture elements or pixels.
• Digitization implies that a digital image is an approximation of a
real scene.
• A digital image is mathematically represented in the form of a matrix.
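The two digitization steps can be sketched in one dimension. The "continuous" signal is stood in for by a finely sampled intensity ramp (an assumption for the sketch); sampling thins out the spatial coordinates, and quantization snaps each amplitude to one of a few discrete gray levels:

```python
import numpy as np

# Stand-in for a continuous signal: 256 samples of a 1-D intensity ramp.
fine = np.linspace(0.0, 1.0, 256)

# Image sampling: keep every 4th spatial sample (256 -> 64 samples).
sampled = fine[::4]

# Gray-level quantization: round each amplitude to one of 8 discrete levels.
levels = 8
quantized = np.floor(sampled * (levels - 1) + 0.5) / (levels - 1)

print(len(sampled), np.unique(quantized).size)   # 64 samples, 8 gray levels
```

Coarser sampling produces the checkerboard effect shown in the sampling example below, while too few quantization levels produces visible banding.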
Example for image sampling

[Figure: the same image sampled at (a) 256×256 pixels and (b) 64×64 pixels.]

In fig (b) the checkerboard effect can be observed. It can also be observed
that the quality of the image depends on the number of pixels.
Example for intensity quantization

[Figure: a 256×256 image quantized to 256, 32 and 2 gray levels.]
Image Processing Techniques

• Image processing techniques are used to convert raw images into
a form suitable for analysis/feature extraction.

Commonly used techniques:


• Histogram Analysis
• Thresholding
• Masking
• Edge detection
• Segmentation
• Region Growing
• Modeling
Histogram Analysis
• A histogram is a graphical representation of the total number of
pixels of an image at each gray level.
• The histogram gives a rough sense of how the pixel values are
distributed in the image.
• The horizontal axis of the graph represents the pixel value,
while the vertical axis represents the number of pixels with that
particular value.

Uses of a histogram:
• Thresholding
• Determining noisy gray levels
• Adjusting the contrast
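Computing a histogram is just counting pixels per gray level. A minimal sketch, using a made-up 3×3 image with gray levels 0–3:

```python
import numpy as np

# Hypothetical 3x3 grayscale image with gray levels 0..3.
img = np.array([[0, 1, 1],
                [2, 2, 2],
                [3, 3, 0]], dtype=np.uint8)

# Histogram: number of pixels at each gray level.
hist = np.bincount(img.ravel(), minlength=4)
print(hist)   # [2 2 3 2] -> 2 pixels at level 0, 2 at 1, 3 at 2, 2 at 3
```

A peak in the histogram between two clusters of gray levels is where a threshold is usually placed, which is why histogram analysis is listed as a basis for thresholding.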
Thresholding

• The process of dividing an image into different regions based on
the pixel values.
• A particular pixel value is set as the threshold and all other
pixels are compared with it. Based on the comparison
(smaller than, greater than or equal to) all pixels are
categorized.
• There can be more than one threshold, depending on the
application.

Uses:
• Converting gray scale to binary
• Object detection
• Image segmentation
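The first use listed, converting grayscale to binary, is a one-line comparison against the threshold. The image and the threshold value 100 are invented for the sketch:

```python
import numpy as np

# Hypothetical 2x2 grayscale image.
img = np.array([[ 10, 200],
                [120,  90]], dtype=np.uint8)

threshold = 100
binary = img > threshold        # pixels above the threshold become 1 (True)
print(binary.astype(np.uint8))  # [[0 1]
                                #  [1 0]]
```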
Edge Detection

• Edges are regions where an abrupt change in pixel intensity
occurs.
• Edges usually occur at the boundary between two objects.
• Edges create discontinuities in the brightness or contrast.
• Edge detection techniques detect the edges of objects in the image.
• The result of edge detection is a line drawing of the objects.
• The lines represent the large changes in pixel values.

Common Methods:
1. Gradient based (first-order derivatives):
   • Sobel edge detection
   • Canny edge detection
   • Roberts edge detection

2. Zero crossing (second derivative)


Example for Edge detection
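As an illustration of the gradient-based approach, the sketch below applies the standard 3×3 Sobel kernels to a tiny made-up image containing one vertical edge. The hand-rolled `convolve2d` helper is an assumption for self-containment, not a library call:

```python
import numpy as np

# Sobel kernels approximating the first-order horizontal/vertical gradients.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def convolve2d(image, kernel):
    """Valid-mode 2-D correlation; enough for a small sketch."""
    h, w = kernel.shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+h, j:j+w] * kernel)
    return out

# 5x6 image with a vertical edge: dark left half, bright right half.
img = np.zeros((5, 6), dtype=float)
img[:, 3:] = 255.0

gx = convolve2d(img, KX)          # horizontal gradient
gy = convolve2d(img, KY)          # vertical gradient
magnitude = np.hypot(gx, gy)      # large values mark the edge
print(magnitude)                  # nonzero only in the columns at the boundary
```

The output is the "line drawing" the bullets above describe: zero everywhere except at the intensity discontinuity.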
Work for Monday (5/3/2018)

1. Read about the basic morphological operations used in image
processing and write down the details in the notebook.

For Reference:
• Introduction to Robotics (Saeed B Niku)
• Digital Image Processing (Gonzalez and Woods)
Introduction to Kinematics

• The science of motion analysis without considering the forces
acting on the subject.
• It describes the position and orientation of the robot's parts.
• A universe coordinate system, to which everything can be
referenced, is used.

[Figure: universe coordinate system U (the reference frame) and an object,
with the questions: What is its position? What is its orientation?]
Position

• The position of a point P with respect to a universal reference
frame can be written as

  P = ax i + by j + cz k

where ax, by and cz are the three components of the vector P, and
i, j, k are the unit vectors along the x, y and z axes.
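As a sketch, the position vector is just the array of its three components; the values (3, 5, 7) are an arbitrary example:

```python
import numpy as np

# P = ax*i + by*j + cz*k, stored as its three components (example values).
ax, by, cz = 3.0, 5.0, 7.0
P = np.array([ax, by, cz])

print(np.linalg.norm(P))   # distance of the point from the origin
```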
Orientation
• The orientation of a body is described by a coordinate system
attached to the body (the moving coordinates) relative to the
universal coordinate system.
• The moving axes are labelled n: normal, o: orientation, a: approach.
• All three axes are perpendicular to each other.
Representing a frame at the origin of the fixed reference frame

• A moving coordinate system located at the origin of the universal
frame can be represented by a 3×3 matrix whose columns are the
n, o and a axes expressed in the universal frame:

      | nx  ox  ax |
  F = | ny  oy  ay |
      | nz  oz  az |

• Each axis of the moving frame is represented with respect to the
three axes of the universal frame.
• Each element of the matrix is a direction cosine: the cosine of
the angle between a moving-frame axis and a universal-frame axis.
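A minimal sketch of such a matrix, assuming (as an example) a moving frame obtained by rotating the universal frame 30° about the z axis; the assertion checks the property that follows from the axes being mutually perpendicular unit vectors:

```python
import numpy as np

# Example: moving frame = universal frame rotated 30 degrees about z.
t = np.radians(30)
n = np.array([np.cos(t),  np.sin(t), 0.0])   # n axis in universal coordinates
o = np.array([-np.sin(t), np.cos(t), 0.0])   # o axis
a = np.array([0.0, 0.0, 1.0])                # a axis

# Columns are the direction cosines of the moving axes.
F = np.column_stack([n, o, a])

# Mutually perpendicular unit axes make F orthonormal: F^T F = I.
assert np.allclose(F.T @ F, np.eye(3))
print(F)
```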


Representing a frame relative to a fixed reference frame

• The location of a frame relative to the fixed frame is represented
by a vector P.
• So the frame is described by one vector of location and its
three vectors of orientation, combined into a 4×4 homogeneous
transformation matrix:

      | nx  ox  ax  px |
  F = | ny  oy  ay  py |
      | nz  oz  az  pz |
      |  0   0   0   1 |

Eg: A frame is located 3, 5, 7 units from the origin. Its n-axis is
parallel to x, its o-axis is at 45° to y and its a-axis is at 45° to z.
Describe the frame in matrix form.
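The example can be worked out numerically. Note the 45° conditions leave the signs of some components ambiguous, so this sketch builds one consistent right-handed solution (o chosen with positive components, a fixed by a = n × o) and assembles the homogeneous matrix from it:

```python
import numpy as np

c = np.cos(np.radians(45))        # ~0.707

n = np.array([1.0, 0.0, 0.0])     # n parallel to x
o = np.array([0.0, c, c])         # o at 45 degrees to y (one valid sign choice)
a = np.cross(n, o)                # right-handed frame: a = n x o = (0, -c, c),
                                  # which is indeed at 45 degrees to z
P = np.array([3.0, 5.0, 7.0])     # frame origin 3, 5, 7 units from the origin

# Homogeneous transformation: orientation columns n, o, a plus position P.
F = np.eye(4)
F[:3, 0], F[:3, 1], F[:3, 2], F[:3, 3] = n, o, a, P
print(np.round(F, 3))
```

The bottom row (0, 0, 0, 1) is what makes the matrix homogeneous, letting one 4×4 matrix carry both rotation and translation.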
• A point in space has only three degrees of freedom: it can only
move along the three reference axes.
• But a rigid body in space has six degrees of freedom, i.e. it can
move along the x, y, z axes and also rotate about these three axes.