Robotic Arm
Ramjee Prasad
10804900
Rh6802B54
B.Tech (LEET)
ECE
Forward Kinematics:

As you can see, for each DOF you add, the math gets more complicated and the joint weights get heavier. You will also see that there is the very likely possibility of multiple, sometimes infinite, solutions (as shown below). How would your arm choose which is optimal, based on torques, previous arm position, gripping angle, etc.?

Motion Planning:

When you tell the end effector to go from one point to the next, you have two choices, and the resulting paths vary between them: have it follow a straight line between both points, or tell all the joints to go as fast as possible, leaving the end effector to possibly swing wildly between those points.
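The multiple-solutions problem mentioned under Forward Kinematics is easy to see on a 2-DOF planar arm: for most reachable points its inverse kinematics has an elbow-down and an elbow-up answer. A minimal sketch (the link lengths and target are made-up values, not from this report):

```python
import math

# Hypothetical link lengths for a 2-DOF planar arm (metres).
L1, L2 = 0.30, 0.25

def forward(theta1, theta2):
    """Forward kinematics: joint angles (rad) -> end-effector (x, y)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """Inverse kinematics: returns both (elbow-down, elbow-up) solutions."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    solutions = []
    for sign in (+1, -1):                 # the two elbow configurations
        theta2 = sign * math.acos(c2)
        k1 = L1 + L2 * math.cos(theta2)
        k2 = L2 * math.sin(theta2)
        theta1 = math.atan2(y, x) - math.atan2(k2, k1)
        solutions.append((theta1, theta2))
    return solutions

# Two different joint configurations reach the same point:
for th1, th2 in inverse(0.35, 0.20):
    print(round(forward(th1, th2)[0], 3), round(forward(th1, th2)[1], 3))
    # prints 0.35 0.2 both times
```

Which of the two the arm should pick is exactly the optimality question raised above (torque, previous position, grip angle, etc.).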
Arm Sagging:

Keep the heaviest components, such as motors, as close to the robot arm base as possible. It might be a good idea for the middle arm joint to be chain/belt driven by a motor located at the base (to keep the heavy motor on the base and off the arm).

A robot arm without video sensing is like an artist painting with his eyes closed. Using basic visual feedback algorithms, a robot arm could go from point to point on its own, without a list of preprogrammed positions. Given a red ball, it could actually reach for it (visual tracking and servoing). If the arm can locate a position in the X-Y space of an image, it can then direct the end effector to go to that same X-Y location (by using inverse kinematics).

Haptic sensing is a little different in that there is a human in the loop. The human controls the robot arm movements remotely. This could be done by wearing a special glove, or by operating a miniature model with position sensors. Robotic arms for amputees perform a form of haptic sensing. Also of note, some robot arms have feedback sensors (such as touch) whose signals are directed back to the human (vibrating the glove, locking model joints, etc.).

To size each joint motor, estimate the torque it must supply:

torque = (mass * distance^2) * (change_in_angular_velocity / change_in_time)

where

change_in_angular_velocity = angular_velocity1 - angular_velocity0

and

angular_velocity = change_in_angle / change_in_time

and where distance is the distance from the rotation axis to the center of mass:

center of mass of the arm: distance = 1/2 * arm_length (use the arm mass)

You also need to account for the object your arm holds:

center of mass of the object: distance = arm_length (use the object mass)

Calculate the torque for the arm, then again for the object, and add the two torques together for the total the motor must provide:

torque(of_object) + torque(of_arm) = torque(for_motor)
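The torque calculation above can be turned into a small sizing script. All masses, lengths, and speeds here are made-up example values:

```python
import math

# Rough joint-motor sizing using the formulas above.
# Hypothetical numbers: a 0.4 m arm link and a 0.2 kg payload.
arm_length  = 0.40                     # m
arm_mass    = 0.60                     # kg, acting at arm_length / 2
object_mass = 0.20                     # kg, acting at arm_length

angular_velocity1 = math.radians(90)   # rad/s, speed we want to reach
angular_velocity0 = 0.0                # rad/s, starting at rest
change_in_time    = 0.5                # s, allowed spin-up time

change_in_angular_velocity = angular_velocity1 - angular_velocity0
angular_acceleration = change_in_angular_velocity / change_in_time

def torque(mass, distance):
    # torque = (mass * distance^2) * (change_in_angular_velocity / change_in_time)
    return (mass * distance ** 2) * angular_acceleration

# Torque for the arm itself, plus torque for the carried object:
total = torque(arm_mass, arm_length / 2) + torque(object_mass, arm_length)
print(round(total, 3), "N*m")   # 0.176 N*m
```

A motor for this joint would then be chosen with a healthy margin above that figure, since the formula ignores gravity loading and friction.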
Tactile sensing (sensing by touch) usually involves force feedback sensors and current sensors. These sensors detect collisions by noticing unexpected force or current spikes, meaning a collision has occurred. A robot end effector can detect a successful grasp, and avoid gripping too tightly or too lightly, just by measuring force. Another method is to use current limiters - sudden large current draws generally mean a collision or contact has occurred. An arm could also adjust its end-effector velocity by knowing whether it is carrying a heavy or a light object - and perhaps even identify the object by its weight.

The robotic arm has the capability to execute both Cartesian and joint motion commands. Cartesian motion can be expressed as absolute or relative motion in the world frame, or as tool-frame motion. Furthermore, straight-line or joint-interpolated motion can be given as absolute joint angles, as relative joint angles with respect to the current joint position, or as timed motion. In order to prevent damaging the robotic hardware in the event of a loss of communication between the ACF and the control software, a sequence of via points is computed for any motion command, and only after the arm has nearly reached the current via point is the next via point sent to the ACE. Thus the trajectory is generated as a sequence of points in space only, and not in time. The one exception is the timed joint motion command, in which the operator can command each joint to move for a specified amount of time; in that case the generated trajectory is a sequence of via points in time only, and not in space.
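The via-point scheme above (a trajectory defined in space only, not in time) can be sketched as follows; the function name and the 2-D workspace are illustrative, not taken from the actual controller:

```python
# Straight-line Cartesian motion expressed as a sequence of via points.
def via_points(start, goal, n):
    """Return n + 1 evenly spaced (x, y) points from start to goal, inclusive."""
    (x0, y0), (x1, y1) = start, goal
    return [(x0 + (x1 - x0) * i / n, y0 + (y1 - y0) * i / n)
            for i in range(n + 1)]

# The controller would convert each point to joint angles (inverse kinematics)
# and dispatch the next point only once the arm has nearly reached the current
# one - so a communication loss simply stops the arm at a via point.
path = via_points((0.10, 0.00), (0.30, 0.20), 4)
print(path[0], path[-1], len(path))   # (0.1, 0.0) (0.3, 0.2) 5
```

Because the points carry no timestamps, the arm's speed along the line is set by how quickly it is allowed to chase each successive via point, matching the space-only trajectory described above.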
The most essential robot peripheral is the end effector, or end-of-arm tooling. Common examples of end effectors include welding devices (such as MIG-welding guns, spot-welders, etc.), spray guns, grinding and deburring devices (such as pneumatic disk or belt grinders, burrs, etc.), and grippers (devices that can grasp an object, usually electromechanical or pneumatic). Another common means of picking up an object is by vacuum. End effectors are frequently highly complex, made to match the handled product, and often capable of picking up an array of products at one time. They may utilize various sensors to aid the robot system in locating, handling, and positioning products.

It is critical that the robotic arm operate safely during the execution of its assigned tasks, so as not to damage itself or other hardware. Each time through the control loop, sensor data are analyzed and an assessment is made as to whether any hardware failures have occurred. Available sensor data include joint positions from both encoders and pot voltages, motor currents, joint temperatures, power supply status, and A/D reference voltages. Potential hardware faults include failures of the sensors, motors, power supply, or voltage reference. The joint positions determined from the encoders and from the pot voltages are also compared, and if the difference exceeds a specified limit, the arm is recalibrated. During normal operation the encoders are used as the primary joint-position sensors; the pot voltages serve as a backup, with some degradation of positioning accuracy.
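The encoder-versus-pot consistency check described above can be sketched like this; the discrepancy limit and the sample readings are hypothetical:

```python
# Per-cycle safety check: compare each joint position reported by the encoder
# with the position derived from the pot voltage, and flag any joint whose
# two sensors disagree by more than a (made-up) limit for recalibration.
DISCREPANCY_LIMIT = 0.05   # rad

def needs_recalibration(encoder_positions, pot_positions):
    """Return the indices of joints whose two position sensors disagree."""
    return [i for i, (enc, pot) in enumerate(zip(encoder_positions,
                                                 pot_positions))
            if abs(enc - pot) > DISCREPANCY_LIMIT]

# Joint 1's readings differ by 0.12 rad, so only it is flagged:
print(needs_recalibration([0.10, 1.20, -0.40], [0.11, 1.32, -0.40]))   # [1]
```

In a real control loop this check would run alongside the motor-current, temperature, and supply-voltage checks listed above, each feeding the same fault-assessment step.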
Introduction:
Arm movements at stationary or moving targets are common in the motor repertoire of humans, but little is known about how the brain uses spatiovisual information concerning the locations of targets to generate arm movements, or how it controls the different neural and muscular structures involved in forming arm trajectories. Newborn babies possess biological networks that are initially capable of performing only reflex actions. As infants learn about their surroundings and begin to comprehend their senses, their biological networks grow more complex, allowing them to perform more intelligent motor control tasks. Using neuroevolution it is possible to evolve networks that simulate infant motor development. Neuroevolution was used to evolve controllers for a simulated robot arm that can position the arm's end-effector close to a stationary target and track moving targets.

In order to successfully control an arm, a neurocontroller must satisfy two conditions: 1) a neural controller must be able to successfully interact with its arm in order to execute centrally planned complex actions, and 2) visually specified goals must be linked with appropriate motor acts, and these motor acts must be able to move the arm to the desired goal (Konczak 2004). Newborn babies cannot perform these processes for many reasons: they have limited knowledge about the physical makeup of their bodies, their limited motor repertoire consists only of reflex actions, they have limited visual capabilities, and they do not possess complex neural control mechanisms. Despite these limitations, babies as young as one week will attempt small arm movements directed at a target. Even though these early arm movements are unpredictable, they are not the result of random activity or purely reflex actions (Trevarthen 1980). As soon as infants begin catching stationary objects successfully, they also begin to catch moving objects. According to von Hofsten, at 18 weeks babies are able to perform anticipatory arm movements when trying to intercept a moving target; these interceptive actions are triggered by the presence of a target in the infant's field of view (von Hofsten 1980). At about three months of age, infants reach consistently for targets in their surroundings and rarely miss (Konczak 2004). Kinematically, their hand paths become straighter and show signs of external force manipulation. Infant movements become more economical, and muscles are activated only when they are needed. At 24 to 36 months of age, infants demonstrate adult-like capability in reaching for stationary and moving targets (Konczak 2004).

Many of the problems associated with the planning and execution of human arm trajectories are illuminated by control strategies that have been developed for robotic manipulators. Since artificial and biological motor control systems often face the same problems, the solutions to these problems are also the same (Hollerbach 1985). This paper presents results on arm positioning and target tracking for a simulated robot arm that parallel results from arm positioning and target tracking experiments carried out on infants (Jansen-Osmann et al. 2002; von Hofsten et al. 1998). This paper models two aspects of infant development using neurocontrollers: 1) infants' ability to generalize, which enables them to perform reaching tasks that they have not previously performed, and 2) infants' ability to extrapolate the paths of moving objects in order to track them. The next section describes learning algorithms that have been applied to robot control tasks. Section 3 describes the Simderella robot arm simulator and the NeuroEvolution of Augmenting Topologies (NEAT) genetic algorithm.

Related Work:

Robot arm control using distance sensors is a complex task that requires mapping the target's location relative to the end-effector to joint movements that position the arm near the target. It is difficult to specify such a mapping by hand, so researchers have applied various machine learning techniques to learn successful control strategies.

Supervised learning methods have been applied to robot control tasks (van der Smagt 1995). Supervised learning requires training examples that demonstrate the correct mapping from input to output. During training the network is presented with input from the training set, and its output is compared to the desired output. Errors are calculated from the differences, and modifications are made to the network's weights using backpropagation. One limitation of this approach is that generating training examples for complex tasks can be difficult.

Exploratory methods involve providing the robot with a set of exploratory behaviors. The robot learns affordances during a behavioral babbling stage, in which it randomly chooses different exploratory behaviors, applies them to objects, and detects sensor invariants (Stoytchev 2005). A shortcoming of this approach is that there are affordances that cannot be discovered because the robot does not possess the required exploratory behavior. Both supervised learning methods and exploratory approaches require human input, which can be cumbersome. Supervised learning methods try to model how infants are taught by demonstration, while exploratory methods model how infants use a subset of their behavioral repertoire to perform a task. Using supervised learning, the controller learns behaviors very similar to those in the training set. In exploratory learning, the controller learns behaviors that are a combination of the exploratory behaviors it was provided with. Neither supervised nor exploratory learning can properly model learning completely new behaviors.

Learning new behaviors can be modelled by using reinforcement learning or genetic algorithms, which are able to determine the effectiveness of the behaviors required to perform a task. In reinforcement learning, agents learn from a signal that provides some measure of performance, which can be provided after a sequence of joint movements is made. As reinforcement signals take into account several control decisions at once, appropriate credit can be assigned to the intermediate joint movements that are necessary to reach the final target position. Neuroevolution has been shown to outperform standard reinforcement learning methods (Moriarty and Miikkulainen 1996). This paper uses the NEAT genetic algorithm to evolve robot controllers.

When adult humans perform goal-directed arm movements under the influence of an external force, they learn to adapt to the external force. After the external force field is removed, they reveal kinematic after-effects that are indicative of a neural controller that still compensates for the external force. This behaviour suggests that humans use a neural representation of the inverse arm dynamics required to control their arms (Konczak 2004). This paper proposes that the neural representation is a biological neural network, which can be modified based on the task the human is trying to achieve. For example, when humans perform reaching tasks under an external force field, they learn to move their arms while applying a force opposite to that of the external force field. This is possible because biological neural networks can be modified to perform a task under different conditions. This paper tries to model the neural inverse dynamics model of the brain by evolving robot arm neurocontrollers using the NEAT genetic algorithm. NEAT does not require
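As a rough illustration of the neuroevolution idea discussed above (though not of NEAT itself, which also evolves network topologies), even a fixed-topology weight-mutation loop can evolve a toy controller that brings a two-link arm's end effector toward a target. Everything here is a made-up sketch, not the paper's setup:

```python
import math
import random

# Hill-climbing neuroevolution sketch: mutate a tiny controller's weights and
# keep the mutant whenever it positions the arm's end effector closer to the
# target. Link lengths, target, and network shape are arbitrary.
random.seed(0)
L1 = L2 = 0.5
TARGET = (0.6, 0.4)

def end_effector(weights, inputs):
    # One linear "neuron" per joint angle - deliberately simplistic.
    t1 = weights[0] * inputs[0] + weights[1] * inputs[1]
    t2 = weights[2] * inputs[0] + weights[3] * inputs[1]
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

def fitness(weights):
    # Controller input is the target position; closer end effector = fitter.
    x, y = end_effector(weights, TARGET)
    return -math.hypot(x - TARGET[0], y - TARGET[1])

start = [random.uniform(-1, 1) for _ in range(4)]
best = start
for _ in range(2000):
    mutant = [w + random.gauss(0, 0.1) for w in best]
    if fitness(mutant) > fitness(best):
        best = mutant

print(fitness(best) >= fitness(start))   # True - the loop never keeps a worse controller
```

NEAT replaces this single hill-climber with a population, speciation, and topology-growing mutations, but the fitness-driven selection of controllers is the same principle.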