
Module – 2

Robot Sensing & Vision:


Sensors
What is a Sensor?
A device that gives an output by detecting changes in quantities or events can be defined as a sensor.
In general, sensors are devices that generate an electrical or optical output signal corresponding to
variations in the level of their inputs. There are different types of sensors; for example, a thermocouple
is a temperature sensor that produces an output voltage based on input temperature changes.

ROBOT SENSORS
A robot sensor measures the condition of the robot and its surrounding environment. Sensors pass electronic
signals to the robot for executing desired tasks, and a robot needs suitable sensors to control itself.
Types of Sensors
• Light sensors
• Sound sensors
• Temperature Sensor
• Touch Sensor
• Proximity Sensors
• IR Sensor
• Ultrasonic Sensor
• Pressure Sensor
• Level Sensors
• Smoke and Gas Sensors
Light Sensors:

Light sensors detect light and generate a voltage difference in response. Two
types of light sensors are commonly used in robot parts:
photoresistors and photovoltaic cells.
• Photoresistors change their resistance with light intensity: more light falling on the sensor
results in less resistance, and vice versa. They are very budget-friendly and can be implemented in robots
easily (see the sketch after this list).
• Photovoltaic cells convert the energy of solar radiation into electrical energy. These are
used in manufacturing solar-powered robots.
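As a hedged illustration of how a photoresistor reading might be used, the sketch below converts a raw ADC value from an LDR in a voltage divider into a resistance; the supply voltage, fixed-resistor value, and ADC resolution are assumptions, not values from the text.

V_SUPPLY = 5.0      # assumed supply voltage (volts)
R_FIXED = 10_000.0  # assumed fixed divider resistor (ohms)
ADC_MAX = 1023      # assumed 10-bit ADC

def ldr_resistance(adc_reading: int) -> float:
    """Estimate the photoresistor's resistance from the divider voltage.

    Divider (LDR on top): V_out = V_SUPPLY * R_FIXED / (R_FIXED + R_LDR),
    so R_LDR = R_FIXED * (V_SUPPLY - V_out) / V_out.
    """
    v_out = V_SUPPLY * adc_reading / ADC_MAX
    if v_out <= 0:
        return float("inf")  # total darkness: resistance effectively infinite
    return R_FIXED * (V_SUPPLY - v_out) / v_out

print(ldr_resistance(900))  # bright scene -> low resistance (about 1.4 kohm)
print(ldr_resistance(100))  # dark scene  -> high resistance (about 92 kohm)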
Sound Sensors:
This sensor recognizes a sound and converts it into an electrical signal. It is used in simple robots that
can navigate with the help of sound. How about a robot turning right on a single clap and left on two claps? But
implementing sound sensors is not as easy as implementing light sensors: the voltage difference created by a
sound is minimal and must be amplified to produce a measurable change.

Temperature Sensors:
Temperature sensors are widely used in robots working in extreme conditions, like a
desert or an ice glacier. These sensors help robots adapt to the ambient temperature. Tiny
sensor ICs produce a voltage difference that varies with the temperature change. Temperature
sensors are used extensively these days.
Contact sensors

Contact sensors require physical contact to function; the contact acts as a trigger for the robot to act on. Contact
sensors are used in limit switches, button switches, and tactile bumper switches, and they are widely used for avoiding obstacles. When
one of these sensor switches touches an obstacle, it signals the robot to perform a task such as turning, reversing, or simply
stopping.

Capacitive sensors are made to react to human touch; a simple example is the touch screen of a smartphone.
Touch sensors are classified by the type of touch they detect, such as capacitance touch
switches, resistance touch switches, and piezo touch switches.
Proximity Sensors:
Proximity sensors detect the presence of an object within a predefined distance without any physical contact, typically by emitting
an electromagnetic field or a beam of radiation and watching for a change in the return signal. There is a wide range of proximity sensors available in the market. Let us learn about the popular ones.

• IR Transceivers:
An IR LED transmits a beam of infrared light; if an obstacle interrupts the beam, the light is reflected back and captured
by the receiver. These sensors are also used to measure distances.
• Ultrasonic:
They create sound waves of high frequency. An echo confirms the presence of an obstacle.
• Photoresistor:
Widely used as light sensors, photoresistors can also be used as proximity sensors: the amount of
light reaching the sensor changes when an obstacle comes into close proximity.
IR Sensor
 Small photo chips containing a photocell that emit and detect infrared light
are called IR sensors. IR sensors are commonly used in remote-control
technology.
 IR sensors can detect obstacles in the path of a robotic vehicle and thus control the
direction of the robotic vehicle. There are different types of sensors that can be used for
detecting infrared light.
 Based on the commands received by the IR receiver interfaced to the microcontroller at the
receiver end, the microcontroller generates the appropriate signals to drive the motors
and steer the robotic vehicle forward, backward, left, or right.
Ultrasonic Sensor
• A transducer that works on a principle similar to sonar or radar, estimating attributes of the
target by interpreting the echoes it returns, is called an ultrasonic sensor or transceiver.

• The high-frequency sound waves generated by an active ultrasonic sensor are received back by the
sensor, which evaluates the echo. The time interval between transmitting the pulse and receiving
the echo determines the distance to the object, so ultrasonic sensors can be used for
measuring the distance of an object.
• The waves transmitted from the ultrasonic transmitter are reflected back to the ultrasonic receiver
by the object. The distance is calculated from the time taken to send and receive these waves and
the velocity of sound.
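As a brief sketch of the round-trip calculation just described (the speed of sound is an assumed ambient-condition value, not a figure from the text):

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degC (assumed)

def distance_m(echo_time_s: float) -> float:
    """Distance to the object from the round-trip echo time.

    The pulse travels out and back, so the total path is divided by 2.
    """
    return SPEED_OF_SOUND * echo_time_s / 2.0

# An echo arriving 2.9 ms after the pulse implies an object about 0.5 m away.
print(f"{distance_m(0.0029):.3f} m")  # -> 0.497 m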
Distance Sensors:
Most proximity sensors can also be used as distance sensors; these are commonly
referred to as range sensors. IR and ultrasonic sensors are great assets for calculating distances
accurately.
Pressure Sensors:
Pressure sensors are widely used to quantify pressure.
A tactile sensor is a robot sensor that measures force and pressure through touch. It
is used to determine the grip strength of a robot arm and the pressure required to hold an object.
Temperature Sensor

• There are different types of temperature sensors that can measure
temperature, such as thermocouples, thermistors, semiconductor
temperature sensors, resistance temperature detectors (RTDs), and so on.

• A simple temperature sensor with its circuit can be used to switch a load on
or off at a specific temperature detected by the sensor; such circuits are
used for controlling the temperature of a device according to the
requirements of industrial applications.
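A minimal sketch of the switching behavior just described, assuming a hypothetical set point and hysteresis band (both values are illustrative):

# Minimal sketch: switch a load on/off at a set-point temperature, with a
# hysteresis band so the load does not chatter near the threshold.

SET_POINT = 30.0   # assumed switching temperature (degC)
HYSTERESIS = 1.0   # assumed dead band (degC)

def update_load(temp_c: float, load_on: bool) -> bool:
    """Return the new load state for the measured temperature."""
    if temp_c >= SET_POINT + HYSTERESIS:
        return True    # too hot: switch the load (e.g., a cooling fan) on
    if temp_c <= SET_POINT - HYSTERESIS:
        return False   # cool enough: switch the load off
    return load_on     # inside the dead band: keep the current state

state = False
for reading in [28.0, 30.5, 31.2, 30.4, 28.5]:
    state = update_load(reading, state)
    print(reading, "->", "ON" if state else "OFF")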
Classification of Robotic Sensors
Uses of Sensors in Robotics

 Safety Monitoring
 Interlocks in work cell control
 Part inspection for quality control
 Determining position and related information
about objects in the robot cell
Machine Vision System:
Machine vision consists of the acquisition of image data, followed by the processing and
interpretation of these data by computer for some industrial application. Machine vision
is a growing technology, with its principal applications in automated inspection and robot
guidance.

The operation of a machine vision system can be divided into the following three
functions:
(1) image acquisition and digitization,
(2) image processing and analysis, and
(3) interpretation.
These functions and their relationships are illustrated schematically in Figure
Image acquisition and digitization is accomplished using a digital camera and a digitizing
system to store the image data for subsequent analysis.

The camera is focused on the subject of interest, and an image is obtained by dividing
the viewing area into a matrix of discrete picture elements (called pixels), in which each
element has a value that is proportional to the light intensity of that portion of the
scene. The intensity value for each pixel is converted into its equivalent digital value by
an ADC (analog-to-digital converter). The operation of viewing a scene consisting of a
simple object that contrasts substantially with its background, and dividing the scene
into a corresponding matrix of picture elements, is depicted in Figure
Illumination. Another important aspect of machine vision is illumination. The scene viewed by the vision
camera must be well illuminated, and the illumination must be constant over time. This almost always requires
that special lighting be installed for a machine vision application rather than relying on ambient light in the
facility. Five categories of lighting can be distinguished for machine vision applications, as depicted in Figure
22.11: (a) front lighting, (b) back lighting, (c) side lighting, (d) structured lighting, and (e) strobe lighting.
Image Processing and Analysis
The second function in the operation of a machine vision system is image processing and
analysis.
One category of techniques in image processing and analysis, called segmentation, is
intended to define and separate regions of interest within the image.

Two of the common segmentation techniques are
• thresholding and
• edge detection.
• Thresholding involves the conversion of each pixel intensity level into a
binary value, representing either white or black. This is done by comparing
the intensity value of each pixel with a defined threshold value. If the pixel
value is greater than the threshold, it is given the binary bit value of white,
say 1; if less than the defined threshold, then it is given the bit value of black,
say 0. Reducing the image to binary form by means of thresholding usually
simplifies the subsequent problem of defining and identifying objects in the
image.
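A minimal NumPy sketch of the thresholding step described above, assuming an 8-bit grayscale image and an illustrative threshold of 128:

import numpy as np

# Minimal sketch: reduce a grayscale image to binary form by thresholding.
# The threshold value of 128 is illustrative, not taken from the text.

def threshold(image: np.ndarray, t: int = 128) -> np.ndarray:
    """Map each pixel to 1 (white) if its intensity exceeds t, else 0 (black)."""
    return (image > t).astype(np.uint8)

gray = np.array([[ 40, 200, 180],
                 [ 90, 130,  20],
                 [210,  60, 250]], dtype=np.uint8)
print(threshold(gray))
# [[0 1 1]
#  [0 1 0]
#  [1 0 1]]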
• Edge detection is concerned with determining the location of
boundaries between an object and its surroundings in an image. This
is accomplished by identifying the contrast in light intensity that exists
between adjacent pixels at the borders of the object. A number of
software algorithms have been developed for following the border
around the object.
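As a hedged sketch of edge detection by adjacent-pixel contrast (a deliberately simple algorithm; the text does not prescribe a specific one), the code below marks a pixel as an edge when the intensity jump to a horizontal or vertical neighbor exceeds a chosen level:

import numpy as np

# Minimal sketch: flag pixels whose intensity differs from a neighbor by
# more than `contrast`. Simpler than Sobel or Canny, but it captures the
# idea of locating boundaries via contrast between adjacent pixels.

def edges(image: np.ndarray, contrast: int = 50) -> np.ndarray:
    img = image.astype(np.int32)
    gx = np.abs(np.diff(img, axis=1))    # horizontal intensity jumps
    gy = np.abs(np.diff(img, axis=0))    # vertical intensity jumps
    edge = np.zeros(img.shape, dtype=np.uint8)
    edge[:, :-1] |= (gx > contrast).astype(np.uint8)
    edge[:-1, :] |= (gy > contrast).astype(np.uint8)
    return edge

img = np.array([[10, 10, 200, 200],
                [10, 10, 200, 200]], dtype=np.uint8)
print(edges(img))   # the object/background border shows up in column 1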
• Another set of techniques in image processing and analysis that normally follows
segmentation is feature extraction. Most machine vision systems characterize an object
in the image by means of the object’s features: its area, length, width, diameter,
perimeter, center of gravity, and aspect ratio. Feature extraction methods are designed
to determine these features based on the area and boundaries of the object. For
example, the area of the object can be determined by counting the number of pixels
that make up the object and multiplying by the area represented by one pixel. Its length
can be found by measuring the distance (in terms of pixels) between the two extreme
opposite edges of the part.
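Continuing the binary-image sketch above, the area and length computations described here might look like this (the pixel-size calibration constants are assumptions):

import numpy as np

# Minimal sketch: extract area and length features from a binary image.
# PIXEL_AREA and PIXEL_PITCH are assumed calibration constants.

PIXEL_AREA = 0.01   # assumed area represented by one pixel (mm^2)
PIXEL_PITCH = 0.1   # assumed center-to-center pixel spacing (mm)

def object_area(binary: np.ndarray) -> float:
    """Area = number of object pixels times the area of one pixel."""
    return float(binary.sum()) * PIXEL_AREA

def object_length(binary: np.ndarray) -> float:
    """Length = pixel span between the two extreme opposite edges."""
    cols = np.where(binary.any(axis=0))[0]
    return float(cols.max() - cols.min() + 1) * PIXEL_PITCH

part = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 1],
                 [0, 0, 1, 0]], dtype=np.uint8)
print(object_area(part))    # 6 pixels -> 0.06 mm^2
print(object_length(part))  # columns 1..3 -> 0.3 mm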
Interpretation
• For any given application, the image must be interpreted based on the extracted features. The
interpretation function is usually concerned with recognizing the object, a task called object
recognition.
• The objective of this task is to identify the object in the image by comparing it with predefined
models or standard values.
• Two commonly used interpretation techniques are template matching and feature weighting.
1. Template matching
2. Feature weighting

Template matching refers to various methods that attempt to compare one or more features of an
image with the corresponding features of a model or template stored in computer memory.

Feature weighting is a technique in which several features (e.g., area, length, and perimeter) are
combined into a single measure by assigning a weight to each feature according to its relative
importance in identifying the object. The score of the object in the image is compared with the score
of an ideal object residing in computer memory to achieve proper identification.
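A minimal sketch of the feature-weighting calculation as described, with illustrative feature names, weights, and tolerance (none of these values come from the text):

# Minimal sketch: combine several object features into one weighted score
# and compare it with the score of an ideal object stored in memory.

WEIGHTS = {"area": 0.5, "length": 0.3, "perimeter": 0.2}    # assumed
IDEAL = {"area": 120.0, "length": 18.0, "perimeter": 46.0}  # assumed
TOLERANCE = 0.05   # accept scores within 5% of the ideal score (assumed)

def score(features: dict) -> float:
    return sum(WEIGHTS[name] * features[name] for name in WEIGHTS)

def identify(features: dict) -> bool:
    ideal_score = score(IDEAL)
    return abs(score(features) - ideal_score) <= TOLERANCE * ideal_score

print(identify({"area": 118.0, "length": 18.2, "perimeter": 45.5}))  # True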
Machine Vision Applications
• The reason for interpreting the image is to accomplish some application.
Machine vision applications in manufacturing divide into three categories:
(1) inspection,
(2) identification, and
(3) visual guidance and control.
Typical industrial inspection tasks include the following:
• Dimensional measurement.
• Dimensional gaging.
• Verification of the presence of components.
• Verification of geometrical features of an object (hole location and number of
holes).
• MOTION INTERPOLATION

• In joint interpolation, the controller determines how far each joint
must move to get from the first point defined in the program to the
next. It then selects the joint that requires the longest time; this
determines the time it will take to complete the move (at a specified
speed). Based on the known move time, and the amount of
movement required for the other axes, the controller subdivides the
move into smaller increments so that all joints start and stop their
motions at the same time. Consider, for example, the move from
point 1, 1 to point 7, 4 in the grid of Fig.
• Linear joint 1 must move six increments (grid locations) and joint 2
must move three increments. To determine the joint-interpolated
path, the controller determines a set of intermediate
addressable points along the path between 1, 1 and 7, 4, which are
then followed by the robot. The following program illustrates the
process.
• The reader should note that the controller alternately moves both
axes and just one axis. Also, for each move requiring actuation of
both axes, the two axes start and stop together. This kind of actuation
causes the robot to take the path illustrated in Fig. The controller does
the equivalent of constructing a hypothetically perfect path between
the two points specified in the program, and then generates the
internal points as close to that line as possible. The resulting path is
not a straight line, but rather an approximation. The controller
approximates the perfect path as best it can within the limitations
imposed by the control resolution of the robot (the available
addressable points in the work volume).
• In our case, with only 64 addressable points in the grid, the
approximation is very rough. With a much larger number of
addressable points and a denser grid, the approximation would be
better.
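As a hedged sketch of the subdivision just described (not any specific controller's algorithm; it assumes moves in the positive direction on the grid), the code below generates the intermediate grid points between 1, 1 and 7, 4 so that both axes start and stop together:

# Minimal sketch: joint-interpolated path on an addressable grid. The
# dominant axis (largest move) sets the number of increments; the other
# axis is scaled to it and rounded to the nearest addressable point.

def joint_interpolate(start, end):
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    steps = max(abs(dx), abs(dy))     # dominant joint sets the move time
    path = []
    for i in range(1, steps + 1):
        x = start[0] + int(dx * i / steps + 0.5)   # round half up
        y = start[1] + int(dy * i / steps + 0.5)
        path.append((x, y))
    return path

# Move from (1, 1) to (7, 4): joint 1 moves 6 increments, joint 2 moves 3.
print(joint_interpolate((1, 1), (7, 4)))
# [(2, 2), (3, 2), (4, 3), (5, 3), (6, 4), (7, 4)]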
• On many robots, the programmer can specify which type of
interpolation scheme to use. The possibilities include:

• 1. Joint interpolation

• 2. Straight line interpolation

• 3. Circular interpolation

• 4. Irregular smooth motions


• For many commercially available robots, joint interpolation is the
default procedure that is used by the controller. That is, the controller
will follow a joint interpolated path between two points unless the
programmer specifies straight line (or some other type of)
interpolation.
• Circular interpolation requires the programmer to define a circle in
the robot's workspace. This is most conveniently done by specifying
three points that lie along the circle. The controller then constructs an
approximation of the circle by selecting a series of addressable points
that lie closest to the defined circle.
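A hedged sketch of recovering the circle defined by three taught points, the setup step circular interpolation requires (standard circumcenter math, not a specific controller's routine):

# Minimal sketch: the circle (center, radius) through three taught points.

def circle_from_points(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0:
        raise ValueError("points are collinear; no circle is defined")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    radius = ((x1 - ux)**2 + (y1 - uy)**2) ** 0.5
    return (ux, uy), radius

# Three points on the unit circle centered at the origin:
print(circle_from_points((1, 0), (0, 1), (-1, 0)))  # ((0.0, 0.0), 1.0)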
In manual leadthrough programming, when the programmer moves
the manipulator wrist to teach spray painting or arc welding, the
movements typically consist of combinations of smooth motion
segments. These segments are sometimes approximately straight,
sometimes curved (but not necessarily circular), and sometimes back-
and-forth motions.
• WAIT, SIGNAL, AND DELAY COMMANDS

• Robots usually work with something in their workspace. In the
simplest case, this is a part that the robot picks up, moves, and drops
off during execution of its work cycle. In more
complex cases, the robot will work with other pieces of equipment in
the workcell, and the activities of the various pieces of equipment must be
coordinated.
• Nearly all industrial robots can be instructed to send signals or wait
for signals during execution of the program. These signals are
sometimes called interlocks, and they have various applications in workcell
control. The most common use of an interlock signal is to actuate the
robot's end effector. In the case of a gripper, the signal is to open or
close the gripper. Signals of this type are usually binary; that is, the
signal is on-off or high-level/low-level.
• In addition to control of the gripper, robots are typically coordinated
with other devices in the cell as well. For example, let us consider a
robot whose task is to unload a press. It is important to inhibit the
robot from having its gripper enter the press before the press is open,
and, even more obviously, it is important that the robot remove its
hand from the press before the press closes.
• To accomplish this coordination, we introduce two commands that
can be used during the program. The first command is
• SIGNAL M
• which instructs the robot controller to output a signal through line M
(where M is one of several output lines available to the controller).
The second command is
• WAIT N
• which indicates that the robot should wait at its current location until it
receives a signal on line N (where N is one of several input lines
available to the robot controller).
• Let us suppose that the two-axis robot of Fig. is to be used to perform
the unloading of the press in our example. The layout of the workcell
is illustrated in Fig. 8-9. The platen of the press (where the parts are
to be picked up) is located at 8, 8. The robot must drop the parts in a
tote pan located at 1, 8. One of the columns of the press is in the way
of an easy straight-line move from 8, 8 to 1, 8. Therefore, the robot
must move its arm around the near side of the column in order to
avoid colliding with it.
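A hedged Python sketch of the interlock logic in this press-unloading cycle (the line numbers and the simulation harness are illustrative assumptions; a real controller executes SIGNAL and WAIT as program statements):

# Minimal sketch: SIGNAL/WAIT interlock between a robot and a press.
# Line numbers are illustrative; a real controller maps them to real I/O.

inputs = {3: False}            # input line 3: "press is open" (assumed)

def press_controller() -> None:
    """Stand-in for the press: it opens when commanded (assumption)."""
    inputs[3] = True

def signal(line: int) -> None:
    print(f"SIGNAL {line}")
    if line == 4:              # output line 4: command the press to open (assumed)
        press_controller()

def wait(line: int) -> None:
    print(f"WAIT {line}")
    while not inputs[line]:    # hold position until the interlock input goes high
        pass

signal(4)  # tell the press to open
wait(3)    # do not enter the press until it confirms it is open
print("robot enters press, grips part, and withdraws")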
• The operation of the gripper was assumed to take place
instantaneously so that its actuation would be completed before the
next step in the program was started. Some grippers use a feedback
loop to ensure that the actuation has occurred before the program is
permitted to execute the next step. A WAIT instruction can be
programmed to accomplish this feedback.
• An alternative way to address this problem is to cause the robot to
delay before proceeding to the next step. In this case the robot would
be programmed to wait for a specified amount of time to ensure that
the operation had taken place. The form of the command for this
second case has a length of time as its argument rather than an input
line.
• DELAY X SEC
• The command indicates that the robot should wait X seconds before
proceeding to the next step in the program. Below, we show a
modified version of Example 8-5, using time as the means for assuring
that the gripper is either opened or closed.
• The reader is cautioned that our programs above are written to look
like computer programs. This is for convenience in our explanation of
the programming principles. The actual teaching of the moves and
signals is accomplished by leading the arm through the motion path
and entering the nonmotion instructions at the control panel or with
the teach pendant. In the majority of industrial applications today,
robots are programmed using one of the leadthrough methods. Only
with textual language programming do the programs read like
computer program listings.
• BRANCHING

• Most controllers for industrial robots provide a method of dividing a
program into one or more branches. Branching allows the robot
program to be subdivided into convenient segments that can be executed
during the program. A branch can be thought of as a subroutine that
is called one or more times during the program.
• Most controllers allow the user to specify whether the signal should
interrupt the program branch currently being executed, or wait until
the current branch completes. The interrupt capability is typically
used for error branches. An error branch is invoked when an incoming
signal indicates that some abnormal event (e.g., an unsafe operating
condition) has occurred.
• A frequent use of the branch capability is when the robot has been
programmed to perform more than one task. In this case, separate
branches are used for each individual task. Some means must be
devised for indicating which branch of the program must be executed
and when it must be executed.
• A common way of accomplishing this is to make use of external
signals which are activated by sensors or other interlocks. The device
recognizes which task must be performed and provides the
appropriate signal to call that branch. This method is frequently used
on spray-painting robots which have been programmed to paint a
limited variety of parts moving past the workstation on a conveyor.
• Photoelectric cells are frequently employed to identify the part to be
sprayed by distinguishing between the geometric features (e.g., size,
shape, the presence of holes, etc.) of the different parts. The
photoelectric cells generate the signal to the robot to call
the spray-painting subroutine corresponding to the particular part.
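A hedged sketch of this signal-driven branch selection (the part names, signal codes, and dispatch mechanism are illustrative assumptions):

# Minimal sketch: selecting a program branch from an external part-ID
# signal. Signal codes and branch names are illustrative only.

def paint_part_a() -> None:
    print("executing spray-paint branch for part A")

def paint_part_b() -> None:
    print("executing spray-paint branch for part B")

# The photoelectric cells encode the recognized part as a signal code.
BRANCHES = {1: paint_part_a, 2: paint_part_b}

def on_part_signal(code: int) -> None:
    branch = BRANCHES.get(code)
    if branch is None:
        print(f"unrecognized part code {code}; invoking error branch")
        return
    branch()

on_part_signal(2)  # conveyor sensor identified part B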
• Robot Language Structure
• The language must be designed to operate with a robot system as
illustrated in Fig. 9-1. It must be able to support the programming of
the robot, control of the robot manipulator, and interfacing with
peripherals in the work cell (e.g., sensors and equipment). It should
also support data communications with other computer systems in
the factory.
• Operating Systems
• In using the textual languages, the programmer has available a CRT
monitor, an alphanumeric keyboard, and a teach pendant. There
should also be some means of storing the programs, either on
magnetic tape or disk. Using the language requires some
mechanism that permits the user to determine whether to write a
new program, edit an existing program, execute (run) a program, or
perform some other function.
• This mechanism is called an operating system, a term used in
computer science to describe the software that supports the internal
operation of the computer system. The purpose of the operating
system is to facilitate the operation of the computer by the user and
to maximize the performance and efficiency of the system and
associated peripheral devices. The definition and purpose of the
operating system for a robot language are similar.
• A robot language operating system contains the following three basic
modes of operation:

• 1. Monitor mode

• 2. Run mode

• 3. Edit mode
• The monitor mode is used to accomplish overall supervisory control
of the system. It is sometimes referred to as the supervisory mode. In
this mode, the user can define locations in space using the teach
pendant, set the speed control for the robot, store programs, transfer
programs from storage back into control memory, or move back and
forth between the modes of operation such as edit or run.
• The run mode is used for executing a robot program. In other words,
the robot is performing the sequence of instructions in the program
during the run mode. When testing a new program in the run mode,
the user can typically employ debugging procedures built into the
language to help in developing a correct program.
• The edit mode provides an instruction set that allows the user to
write new programs or to edit existing programs. Although the
operation of the editing mode differs from one language system to
another, the kinds of editing operations that can be performed
include the writing of new lines of instructions in sequence, deleting
or making changes to existing instructions, and inserting new lines in
a program.
• An interpreter is a program in the operating system that executes each
instruction of the source program one at a time. VAL is an example of a
robot language that is processed by an interpreter. A compiler is a program
in the operating system that passes through the entire source program and
pretranslates all of the instructions into machine-level code that can be read
and executed by the robot controller. MCL is an example of a robot language
that is processed by a compiler. Compiled programs usually result in faster
execution times. On the other hand, a source program that is processed by an
interpreter can be edited more readily, since recompilation of the entire
program is not required.
• MOTION COMMANDS

• Among the most important functions in a robot language are those
which control the movement of the manipulator arm. This section
describes how the textual languages accomplish these functions.
• MOVE and Related Statements

• One of the most important functions of the language, and the
principal feature that distinguishes robot languages from computer
programming languages, is manipulator motion control.
• MOVE A1

• This causes the end of the arm (end effector) to move from its
present position to the previously defined point named A1; thus
A1 defines the position and orientation of the end effector. This
MOVE statement generally causes the arm to move with a joint-
interpolated motion. There are variations on the MOVE statement.
For example, the VAL II language provides for a straight-line move
with the statement:
• MOVES A1

• The suffix S stands for straight-line interpolation. The controller
computes a straight-line trajectory from the current position to the
point A1 and causes the robot arm to follow that trajectory.
• In some cases, the trajectory must be controlled so that the end
effector passes through some intermediate point as it moves from the
present position to the next point defined in the statement. This
intermediate point is referred to as a via point. The need for a via
point arises in applications in which there are obstacles and
clearances to be considered along the motion path. For example, in
removing a part from a production machine, the arm trajectory would
have to be planned so that no interference occurs with the machine.
The move statement for this situation might read like the following:
• MOVE A1 VIA A2

• This command tells the robot to move its arm to point A1, but to pass
through via point A2 in making the move.
• A related move sequence involves an approach to a point and
departure from the point. The situation arises in many material-
handling applications, in which it is necessary for the gripper to be
moved to some intermediate location above the part before
proceeding to it for the pickup. This is what is called an approach, and
the robot languages permit this motion sequence to be done in
several different ways. We will use VAL II to illustrate. Suppose the
robot's task is to pick up a part from a container. We assume that the
gripper is initially open. The following sequence might be used:
• APPRO A1, 50

• MOVES A1

• SIGNAL (to close gripper)

• DEPART 50
• The APPRO command causes the end effector to be moved to the
vicinity of point A1, but offset from the point along the tool z axis in
the negative direction (above the part) by a distance of 50 mm. From
this location the end effector is moved straight to the point A1 and
closes its gripper around the part. The DEPART statement causes the
robot to move away from the pickup point along the tool z axis to a
distance of 50 mm. The provision is available in VAL II for the APPRO
and DEPART statements to be performed using straight line
interpolation rather than joint interpolation. These commands are
APPROS and DEPARTS, respectively.
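A hedged sketch of the approach-point computation implied by APPRO (plain vector math; the tool-z convention, units, and coordinates are assumptions for illustration):

import numpy as np

# Minimal sketch: compute an APPRO-style approach point offset from the
# target along the negative tool z axis. Units (mm) are assumed.

def approach_point(target_xyz, tool_z_axis, offset_mm):
    """Return the point offset from the target against the tool z axis."""
    z = np.asarray(tool_z_axis, dtype=float)
    z /= np.linalg.norm(z)                      # unit vector along tool z
    return np.asarray(target_xyz, dtype=float) - offset_mm * z

# Tool z pointing straight down at the part: approach 50 mm above it.
print(approach_point([300.0, 150.0, 20.0], [0.0, 0.0, -1.0], 50.0))
# -> [300. 150.  70.]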
• In addition to absolute moves to a defined point in the workspace,
incremental moves are also available to the programmer. The
following examples from AML illustrate the possibilities:
• DMOVE (1, 10)

• DMOVE ((4,5,6), (30,60,90))

• DMOVE is the command for an incremental move. The joint(s) and the
distance(s) of the incremental move are specified in parentheses. The first
example moves joint 1 (assumed to be a linear joint) by 10 in. The
second example commands an incremental move of axes 4, 5, and 6
by 30°, 60°, and 90°, respectively.
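A minimal sketch of how a controller might apply such incremental moves to its stored joint values (the six-joint array and its units are illustrative assumptions):

# Minimal sketch: applying DMOVE-style incremental moves to stored joint
# positions. Joint 1 is treated as linear (in.), joints 4-6 as rotary (deg).

joints = [0.0] * 6   # joints 1..6 (assumed six-axis arm)

def dmove(axes, deltas) -> None:
    """Add each delta to the corresponding joint (axes are 1-indexed)."""
    axes = (axes,) if isinstance(axes, int) else tuple(axes)
    deltas = (deltas,) if isinstance(deltas, (int, float)) else tuple(deltas)
    for axis, delta in zip(axes, deltas):
        joints[axis - 1] += delta

dmove(1, 10)                    # DMOVE (1, 10)
dmove((4, 5, 6), (30, 60, 90))  # DMOVE ((4,5,6), (30,60,90))
print(joints)                   # [10.0, 0.0, 0.0, 30.0, 60.0, 90.0]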
• In the AL language, which is designed for multiple arm control, the
MOVE statement can be used to identify which arm is to be moved.
Robots of the future might possess more than a single arm, and we
present the AL statement to illustrate how this might be done.

• MOVE ARM2 TO A1

• The robot is instructed to move its arm number 2 from the current
position to point A1.
• SPEED Control

• The SPEED command is used to define the velocity with which the
robot's arm is moved. When the SPEED command is given in the
monitor mode (preparatory to executing a program), it indicates some
absolute measure of velocity available for the robot. This might be
specified as
• SPEED 60 IPS
• which indicates that the speed of the end effector during program
execution shall be 60 in./sec unless it is altered to some other value
during the program. If no units are given, the speed command usually
indicates some value relative to the robot designer's concept of
"normal" speed. For instance,

• SPEED 75

• indicates that the robot should operate at 75 percent of normal
speed during program execution (unless altered during the program).
• END EFFECTOR AND SENSOR COMMANDS

• End Effector Operation


• One of the uses of the SIGNAL command in the previous chapter was
to operate the gripper: SIGNAL 5 to close the gripper and SIGNAL 6 to
open the gripper. In most robot languages, there are better ways of
exercising control over end effector operation. The most
elementary commands are
• OPEN and CLOSE

• VAL II distinguishes between differences in the timing of the gripper
action. The two commands OPEN and CLOSE cause the action to
occur during execution of the next motion, while the statements
• OPENI and CLOSEI

• cause the action to occur immediately, without waiting for the next
motion to begin. This latter case results in a small time delay, which
can be defined by a parameter setting in VAL II.
• CLOSE 40 MM or CLOSE 1.575 IN

• When applied to a gripper that has servo control over the width of the
finger opening, either command would close the gripper to an opening of
40 mm (1.575 in.).
• CLOSE 3.0 LB

• This indicates the type of command that might be used to
apply a 3-lb gripping force against the part.

• CENTER

• The CENTER statement allows the robot to center its arm around the
object rather than causing the object to be moved by the gripper
closure.
• OPERATE TOOL (SPEED = 125 RPM)
• OPERATE TOOL (TORQUE = 5 IN LB)
• OPERATE TOOL (TIME = 10 SEC)

• We are assuming a powered rotational tool such as a powered
screwdriver. All three statements apply to its operation. However,
the first two statements are mutually exclusive: either the tool can be
operated at 125 r/min or it can be operated with a torque of 5 in.-lb.
The third statement indicates that the operation will terminate
after 10 sec.
• Sensor Operation
Let us consider some additional control features of the SIGNAL, WAIT,
and similar statements beyond those already described. The SIGNAL
command can be used to turn an output signal on or off. The
statements
SIGNAL 3, ON
.
.
.
SIGNAL 3, OFF
would allow the signal from output port 3 to be turned on at one
point in the program and turned off at another point in the program.
• SIGNAL 105, 4.5

• This would provide an output of 4.5 units (probably volts) within the
allowable range of the output signal.
• The relevant commands are as follows:
SIGNAL 5, ON Robot turns on the device
WAIT 15, ON Device signals back that it is on
.
.
.
SIGNAL 5, OFF Robot turns off the device
WAIT 15, OFF Device signals back that it is off
The WAIT statement can be used for analog signals as well as binary digital
signals in the same manner as the SIGNAL command.
• The variables could be defined as follows:
DEFINE MOTOR1 = OUTPORT 5
DEFINE SENSR3 = INPORT 15
• This would permit the preceding input/output statements to be written in
the following way:
SIGNAL MOTOR1, ON
WAIT SENSR3, ON
.
.
.
SIGNAL MOTOR1, OFF
WAIT SENSR3, OFF
It is also possible to define an analog signal, either input or output, as a
variable that is used during program execution.
• COMPUTATIONS AND OPERATIONS

• The need arises in many robot programs to perform arithmetic
computations and other types of operations on constants, variables,
and other data objects. The standard set of mathematical operators
in second-generation languages is:
+ addition
- subtraction
* multiplication
/ division
** exponentiation
= equal to

Precedence rules are established that evaluate an expression from left
to right, with parentheses used to indicate that expressions within
parentheses should be evaluated first.
• Some of the languages also have the capability to calculate the
common trigonometric, logarithmic, exponential, and similar
functions. The following is a list of these functions that we can make
use of in some of the problem exercises:

SIN(A) Sine of an angle A
COS(A) Cosine of an angle A
TAN(A) Tangent of an angle A
COT(A) Cotangent of an angle A
ASIN(A) Arc sine of an angle A
ACOS(A) Arc cosine of an angle A
ATAN(A) Arc tangent of an angle A
ACOT(A) Arc cotangent of an angle A
LOG(X) Natural logarithm of X
EXP(X) Exponential function e**X
ABS(X) Absolute value of X
INT(X) Largest integer less than or equal to X
SQRT(X) Square root of X
• In addition to the arithmetic and trigonometric operators, relational
operators are also used to evaluate and compare expressions. The
common relational operators are listed below:

• EQ Equal to
• NE Not equal to
• GT Greater than
• GE Greater than or equal to
• LT Less than
• LE Less than or equal to
• Example:

• This program performs the same palletizing operation as Examples 8-7
and 8-8 of the previous chapter. To review, the robot must pick up
parts from an incoming chute and deposit them onto a pallet. The
pallet has four rows spaced 50 mm apart and six columns spaced
40 mm apart. The plane of the pallet is assumed to be parallel to the
xy plane. The rows of the pallet are parallel to the x axis and the
columns of the pallet are parallel to the y axis. Figure 9-2 shows the
arrangement of the pallet. We will use the following constants and
variables in our program:
• Variables:
• ROW The row number (integer value)
• COLUMN The column number (integer value)
• X An x-coordinate value
• Y A y-coordinate value
• Location constants:
• PICKUP The pickup point on the chute
• CORNER The corner starting point on the pallet
• Location variables:
• DROP The dropoff point
• The program to perform the palletizing operation is as follows (we
provide commentary about some of the statements in the right
margin):
PROGRAM PALLETIZE
DEFINE PICKUP = JOINTS (1,2,3,4,5)
DEFINE CORNER = JOINTS (1,2,3,4,5)
DEFINE DROP = COORDINATES (X,Y)
OPENI
ROW = 0                      Initialize ROW
10 Y = ROW * 50.0            Compute y for dropoff point
COLUMN = 0                   Initialize COLUMN
20 X = COLUMN * 40.0         Compute x for dropoff point
DROP = CORNER + (X,Y)        Define DROP for each iteration
APPRO PICKUP, 50             Pickup sequence
MOVES PICKUP                 Pickup sequence
CLOSEI                       Pickup sequence
DEPART 50                    Pickup sequence
APPRO DROP, 50               Dropoff sequence
MOVES DROP                   Dropoff sequence
OPENI                        Dropoff sequence
DEPART 50                    Dropoff sequence
COLUMN = COLUMN + 1          Increment COLUMN variable
IF COLUMN LT 6 GOTO 20       Check if COLUMN limit reached
ROW = ROW + 1                Increment ROW variable
IF ROW LT 4 GOTO 10          Check if ROW limit reached
END PROGRAM
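For comparison, a hedged Python rendering of the same nested-loop logic (motion and gripper commands are stubbed out as prints; only the loop structure and the pallet geometry from the example are real):

# Minimal sketch: the palletizing loop above in Python. The 4 x 6 grid and
# the 50 mm / 40 mm pitches come from the example; the stubs are assumptions.

ROW_PITCH = 50.0   # mm between rows
COL_PITCH = 40.0   # mm between columns

def palletize() -> None:
    for row in range(4):              # four rows
        y = row * ROW_PITCH
        for column in range(6):       # six columns
            x = column * COL_PITCH
            print("pick up part at PICKUP")
            print(f"drop part at CORNER + ({x:.0f}, {y:.0f})")

palletize()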
