Robotics Exam Pass

(Figure: example of a loading/unloading application.)
2.7 Robot Sensors
Sensors are devices for sensing and measuring geometric and physical properties of the robot and the surrounding environment, such as:
– Position, orientation, velocity, acceleration
– Distance, size
– Force, moment
– Temperature, luminance, weight
2.7.1 Desirable features of Sensors:
1. Accuracy
Accuracy is how close the output is to the true value; it should be high.
2. Precision
The sensed output should not vary over time for the same input; the precision of the sensor should be high.
3. Operating Range
The sensor should have a wide operating range and should be accurate and precise over this entire range.
4. Speed of Response
The sensor should respond to changes in the sensed variable in minimum time.
5. Calibration
The sensor should be easy to calibrate; the time and effort required to calibrate it should be minimal, and it should not require frequent recalibration.
6. Reliability
The sensor should have high reliability; frequent failures should not occur.
7. Cost and Ease of Operation
The cost should be as low as possible, and installation, operation, and maintenance should be easy and should not require skilled or highly trained personnel.
Examples of Sensors:

Potentiometers
Thermocouples, thermistors.
Strain gauge
Load cell
Infrared sensors
LVDT
Pyrometers
Piezoelectric devices
Pressure Transducers
Vision and voice sensors.
There are generally two categories of sensors used in robotics: those for internal purposes and those for external purposes. Internal sensors are used to monitor and control the various joints of the robot; they form a feedback control loop with the robot controller. Examples of internal sensors include potentiometers and optical encoders, while tachometers of various types can be deployed to control the speed of the robot arm. External sensors are external to the robot itself and are used when we wish to coordinate the operations of the robot with other pieces of equipment in the robotic work cell. External sensors can be relatively simple devices, such as limit switches that determine whether a part has been positioned properly or whether a part is ready to be picked up from an unloading bay. To perform its tasks satisfactorily, a robot may also need senses and capabilities such as vision and hand-eye coordination, touch, and hearing; accordingly, the types of sensors used in robotics can be divided into the categories below.
Robotic Sensors
A number of advanced sensor technologies may also be used; these are outlined in Table 1.
Table 1. Sensor technologies used in robotics
Tactile sensors – Used to determine whether contact is made between the sensor and another object. Two types: touch sensors, which indicate only that contact has been made, and force sensors, which indicate the magnitude of the contact force with the object.
Proximity sensors – Used to determine how close an object is to the sensor. Also called range sensors.
Optical sensors – Photocells and other photometric devices used to detect the presence or absence of objects. Often used in conjunction with proximity sensors.
Machine vision – Used in robotics for inspection, parts identification, guidance, and other uses.
Others – A miscellaneous category of sensors, including devices for measuring temperature, fluid pressure, fluid flow, electrical voltage, current, and other physical properties.
2.7.2 Range sensor:
Ranging sensors include sensors that require no physical contact with the object being detected. They allow a robot to see an obstacle without actually having to come into contact with it. This can prevent possible entanglement, allow for better obstacle avoidance (compared with touch-feedback methods), and possibly allow software to distinguish between obstacles of different shapes and sizes. There are several methods used to allow a sensor to detect obstacles from a distance. Light-based ranging sensors use multiple methods for detecting obstacles and determining range. The simplest method uses the intensity of the light reflected from an obstacle to estimate distance. However, this can be significantly affected by the color/reflectivity of the obstacle and by external light sources. A more common method is to use a beam of light projected at an angle and a strip of detectors spaced away from the emitter; Sharp infrared range sensors use this method. This method is less affected by the color/reflectivity of the object and by ambient light.
LIDAR, a more advanced method of range detection, uses a laser that is swept across the sensor's field of view. The reflected laser light is usually analyzed in one of two ways. Units with longer ranges sometimes determine distance by measuring the time it takes for the laser pulse to return to the sensor; this requires extremely fast timing circuitry. Another method uses phase-shift detection to determine range by analyzing the incoming light and comparing it to a reference signal.
Working principle:
– Triangulation: use the triangle formed by the travelling path of the signal to calculate the distance (a minimal sketch of this relation follows the list below).
– Time-of-flight: use the time of flight of the signal to measure the distance.
Typical range sensors:
– Infrared range sensor (triangulation)
– Ultrasonic sensor (time-of-flight)
– Laser range sensor (triangulation)
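To make the triangulation principle concrete, the following is a minimal Python sketch (not taken from any particular sensor's documentation) of the similar-triangles relation d = f·b/x used by triangulating infrared rangers such as the Sharp family; the function and parameter names are illustrative assumptions.

def triangulation_range(baseline_m, focal_length_m, spot_offset_m):
    """Estimate the distance to a target from the position of the reflected
    spot on the detector strip, using the similar-triangles relation
    d = f * b / x (emitter-detector baseline b, lens focal length f,
    spot offset x on the detector). Names are illustrative."""
    if spot_offset_m <= 0:
        raise ValueError("no reflection detected or target out of range")
    return focal_length_m * baseline_m / spot_offset_m

# Example: 1 cm baseline, 2 mm focal length, spot 0.1 mm off-axis -> 0.2 m
print(triangulation_range(0.01, 0.002, 0.0001))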
Ultrasonic Sensors
– Emit a quick burst of ultrasound (around 50 kHz; human hearing ranges from 20 Hz to 20 kHz).
– Measure the elapsed time until the receiver indicates that an echo is detected.
– Determine how far away the nearest object is from the sensor.
D = v × t
where D = round-trip distance, v = speed of propagation (≈340 m/s), and t = elapsed time; the one-way distance to the object is D/2.
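As a minimal sketch of this time-of-flight calculation (the function name and the fixed 340 m/s figure are simplifying assumptions; real sensors compensate for temperature):

SPEED_OF_SOUND = 340.0  # m/s, approximate value at room temperature

def ultrasonic_distance(echo_time_s, speed=SPEED_OF_SOUND):
    """Convert the elapsed time between ping and echo into the one-way
    distance to the nearest object: round trip D = v * t, so the object
    distance is D / 2."""
    round_trip = speed * echo_time_s
    return round_trip / 2.0

# Example: an echo arriving after 5.88 ms corresponds to roughly 1 m
print(ultrasonic_distance(0.00588))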
Applications:
– Distance measurement
– Mapping: rotating proximity scans (map the proximity of objects surrounding the robot)
Limitations of Ultrasonic Sensors
• Background noises: If there are other ultrasonic sources, the sensor may detect signals emitted by another source.
• The speed of sound varies with air temperature and pressure – a 16 °C temperature change can cause a 30 cm error at 10 m!
• Cross-talk problem: if a robot has more than one ultrasonic sensor and their measurement ranges intersect, a sensor may receive signals emitted by the others.
• Poor surface reflection: some surface materials absorb ultrasonic waves.
• Surface orientation affects the reflection of ultrasonic signals.
2.7.3 Tactile sensor
Tactile sensors provide the robot with the capability to respond to contact forces between itself and other objects within its work volume. Tactile sensors can be divided into two types:
1. Touch sensors
2. Stress sensors
Touch sensors are used simply to indicate whether contact has been made with an object; a simple microswitch can serve the purpose of a touch sensor. Stress sensors are used to measure the magnitude of the contact force; strain gauge devices are typically employed in force-measuring sensors.

2.7.4 Proximity sensor


Proximity sensors are used to sense when one object is close to another object. On a robot, the proximity sensors would be located on or near the end effector. This sensing capability can be engineered by means of optical proximity devices, eddy-current proximity detectors, magnetic field sensors, or other devices. In robotics, proximity sensors might be used to indicate the presence or absence of a work part or other object. They could also be helpful in preventing injury to the robot's human coworkers in the factory.
2.7.5 Optical or Infrared Light-Based sensors
This is one of the areas receiving a lot of attention in robotics research, and computerized vision systems will be an important technology in future automated factories. Robot vision is made possible by means of a video camera, a sufficient light source, and a computer programmed to process image data. The camera is mounted either on the robot or in a fixed position above the robot so that its field of vision includes the robot's work volume. The computer software enables the vision system to sense the presence of an object and its position and orientation. Vision capability would enable the robot to carry out the following kinds of operations:
– Retrieve parts which are randomly oriented on a conveyor
– Recognize particular parts which are intermixed with other objects
– Perform assembly operations which require alignment
2.7.6 Proximity sensors
The simplest light-based obstacle sensor projects a light and looks for a reflection of a certain strength. If the reflection is strong enough, it can be inferred that an obstacle lies within a certain range of the sensor. Multiple light sources can be pulsed on in sequence to give some resolution to the sensor.
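As a minimal sketch of this reflection-strength test (the readings and threshold below are illustrative values, not taken from any specific sensor's API):

def obstacle_in_range(reflected_reading, ambient_reading, threshold):
    """Return True if the reflected light (the reading taken with the emitter
    pulsed on, minus the ambient reading taken with it off) exceeds a
    calibrated threshold, i.e. an obstacle lies within range."""
    return (reflected_reading - ambient_reading) > threshold

# Example with a calibrated threshold of 120 detector counts
print(obstacle_in_range(reflected_reading=300, ambient_reading=80, threshold=120))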
2.7.7 Voice sensors
Another area of robotics research is voice sensing or voice programming. Voice programming can be defined as the oral communication of commands to the robot or other machine. The robot controller is equipped with a speech recognition system which analyses the voice input and compares it with a set of stored word patterns; when a match is found between the input and a stored vocabulary word, the robot performs the action which corresponds to that word. Voice sensors could be useful in robot programming to speed up the programming procedure, just as they do in NC programming. They would also be beneficial, especially in hazardous working environments, for performing unique operations such as maintenance and repair work. The robot could be placed in the hazardous environment and remotely commanded to perform the repair chores by means of step-by-step instructions.
Internal sensor
Internal sensors measure the robot's internal state. They are used to measure its position, velocity and acceleration.
2.7.8 Position sensor
Position sensors measure the position of a joint (the degree to which the joint is extended).
They include:
• Encoder: a digital optical device that converts motion into a sequence of digital pulses.
• Potentiometer: a variable-resistance device that expresses linear or angular displacements in terms of voltage (see the sketch after this list).
• Linear variable differential transformer (LVDT): a displacement transducer that provides high accuracy. It generates an AC signal whose magnitude is a function of the displacement of a moving core.
• Synchros and resolvers
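As a minimal sketch of how a potentiometer reading is turned into a joint angle (the ADC resolution, reference voltage, and 300-degree travel are illustrative assumptions, not values from any particular device):

def potentiometer_angle(adc_counts, adc_max=1023, v_ref=5.0, travel_deg=300.0):
    """Convert an ADC reading of the potentiometer wiper voltage into an
    angle in degrees, assuming a linear potentiometer powered from v_ref
    whose full electrical travel spans travel_deg degrees."""
    voltage = adc_counts / adc_max * v_ref
    return voltage / v_ref * travel_deg

# Example: a mid-scale reading corresponds to roughly mid-travel (~150 deg)
print(potentiometer_angle(512))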
2.7.9 Velocity Sensor
A velocity or speed sensor takes consecutive position measurements at known intervals and computes the time rate of change of the position values.
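A minimal sketch of this differencing scheme, assuming position samples taken at a fixed, known interval (the units are whatever the position sensor reports):

def velocity_from_positions(positions, sample_period_s):
    """Estimate velocity as the change between consecutive position samples
    divided by the fixed sampling interval."""
    return [(p1 - p0) / sample_period_s
            for p0, p1 in zip(positions, positions[1:])]

# Joint angles (degrees) sampled every 10 ms -> velocities in degrees/second
print(velocity_from_positions([0.0, 0.5, 1.2, 2.1], 0.01))  # [50.0, 70.0, 90.0]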
2.7.10 Acceleration Sensors:
Accelerometer: An accelerometer measures the acceleration (rate of change of velocity) of anything it is mounted on. Inside an accelerometer MEMS device are tiny micro-structures that bend due to momentum and gravity. When the device experiences any form of acceleration, these tiny structures bend by a corresponding amount, which can be detected electrically. Accelerometers are easily and cheaply available, making them very viable sensors for low-cost applications. Accelerometers are important in the sensor world because they can sense such a wide range of motion. They are used in Apple PowerBooks (and other laptops) to detect when the computer is suddenly moved or tipped, so the hard drive can be locked up during movement. They are used in cameras to control image-stabilization functions. They are used in pedometers, gait meters, and other exercise and physical-therapy devices. They are used in gaming controls to generate tilt data (a tilt-angle sketch follows the list of uses below). They are used in automobiles to control airbag release when there is a sudden stop. There are countless other applications for them.
Possible uses for accelerometers in robotics:
• Self-balancing robots
• Tilt-mode game controllers
• Model airplane autopilot
• Alarm systems
• Collision detection
• Human motion monitoring
• Leveling sensor, inclinometer
• Vibration detectors for vibration isolators
• G-force detectors
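As a minimal sketch of the inclinometer / tilt-sensing use mentioned above, here is one common convention for computing pitch and roll from static accelerometer readings (valid only when the sensor is not otherwise accelerating; axis conventions vary between devices):

import math

def tilt_angles_deg(ax, ay, az):
    """Compute pitch and roll from the measured direction of gravity.
    Readings may be in g or m/s^2 as long as all three share the unit."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Example: sensor lying flat, gravity entirely along the z axis -> (0.0, 0.0)
print(tilt_angles_deg(0.0, 0.0, 1.0))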
2.7.11 Touch, Force, Torque:
A tactile sensor is a device that measures information arising from physical interaction with its environment. Tactile sensors are generally modeled after the biological sense of cutaneous touch, which is capable of detecting stimuli resulting from mechanical stimulation, temperature, and pain (although pain sensing is not common in artificial tactile sensors). Tactile sensors are used in robotics, computer hardware, and security systems. A common application of tactile sensors is in touch-screen devices on mobile phones and computers.
Tactile sensors may be of different types, including piezoresistive, piezoelectric, capacitive, and elasto-resistive sensors.
Force Sensors (Force Transducers)
There are many types of force sensors, usually referred to as torque cells (to measure torque) and load cells (to measure force). Force transducers are devices useful for directly measuring torques and forces within a mechanical system. To get the most benefit from a force transducer, you must have a basic understanding of the technology, construction, and operation of this unique device. Forces can be measured by measuring the deflection of an elastic element.
Strain gauges are the most common sensing elements for force. A strain gauge converts deformation into a change in its resistance. Gauge resistance typically ranges from about 30 Ω to 3 kΩ, corresponding to deformations of roughly 30 µm to 100 µm. Shaft torque is measured with strain gauges mounted on a shaft with a specially designed cross-section.
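A minimal sketch of how a strain-gauge resistance change maps to strain and then to force on an axially loaded elastic element (the gauge factor, Young's modulus, and cross-section below are illustrative numbers, not values from the text):

def strain_from_resistance(delta_r_ohm, nominal_r_ohm, gauge_factor=2.0):
    """Strain = (dR / R) / GF for a metallic strain gauge."""
    return (delta_r_ohm / nominal_r_ohm) / gauge_factor

def force_from_strain(strain, youngs_modulus_pa, cross_section_m2):
    """Axial force on an elastic element: F = E * strain * A."""
    return youngs_modulus_pa * strain * cross_section_m2

# Illustrative: a 0.12 ohm change on a 120 ohm gauge bonded to a 1 cm^2
# steel element (E ~ 200 GPa) corresponds to about 10 kN
eps = strain_from_resistance(0.12, 120.0)      # 5e-4 strain
print(force_from_strain(eps, 200e9, 1e-4))     # ~10000 N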
2.7.12 Position Measurement
• An optical encoder is used to measure the rotational angle of a motor shaft.
• It consists of a light beam, a light detector, and a rotating disc with a radial grating on its surface.
• The grating consists of black lines separated by clear spaces, with the lines and spaces of equal width. A line cuts the beam, giving a low signal output; a space allows the beam to pass, giving a high signal output.
• A train of pulses is generated as the disc rotates. By counting the pulses, it is possible to determine the rotational angle.
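A minimal sketch of converting a pulse count into a shaft angle (this simple single-channel version cannot tell the direction of rotation; real encoders use two quadrature channels for that):

def encoder_angle_deg(pulse_count, pulses_per_revolution):
    """Convert pulses counted since a reference position into degrees."""
    return 360.0 * pulse_count / pulses_per_revolution

# Example: a disc with 1024 lines; 256 pulses correspond to a quarter turn
print(encoder_angle_deg(256, 1024))  # 90.0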

Trajectory planning
Trajectory planning for robots involves determining the path and control commands that a robot will follow to move from its initial position to a
target position, while considering time, velocity, acceleration, and dynamics constraints. The goal is to create a smooth,
efficient, and feasible trajectory that ensures the robot’s safety, precision, and optimal performance. Various methods are used
for trajectory planning, including polynomial interpolation (e.g., cubic splines), piecewise linear paths, and velocity profiles like
trapezoidal or cubic velocity profiles. These profiles define the motion parameters (position, velocity, acceleration) over time,
ensuring smooth transitions and minimizing jerk. Constraints such as workspace boundaries, joint limits, and environmental
factors are incorporated to avoid collisions and maintain stability. Trajectory planning is essential for applications in industrial
automation, robotics, and autonomous vehicles, where precision and time efficiency are critical.
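As a minimal sketch of one of the interpolation methods mentioned above, the following generates a cubic (third-order polynomial) joint trajectory with zero start and end velocities; it is an illustrative example rather than a method prescribed by the text.

def cubic_trajectory(q0, qf, tf):
    """Return position, velocity and acceleration functions for a cubic
    trajectory q(t) = a0 + a1*t + a2*t^2 + a3*t^3 that moves from q0 to qf
    in time tf with zero boundary velocities."""
    a0, a1 = q0, 0.0
    a2 = 3.0 * (qf - q0) / tf**2
    a3 = -2.0 * (qf - q0) / tf**3

    def position(t):
        return a0 + a1 * t + a2 * t**2 + a3 * t**3

    def velocity(t):
        return a1 + 2 * a2 * t + 3 * a3 * t**2

    def acceleration(t):
        return 2 * a2 + 6 * a3 * t

    return position, velocity, acceleration

# Move a joint from 0 to 90 degrees in 2 seconds
pos, vel, acc = cubic_trajectory(0.0, 90.0, 2.0)
print(pos(0.0), pos(1.0), pos(2.0))  # 0.0, 45.0, 90.0
print(vel(0.0), vel(2.0))            # 0.0, 0.0 (smooth start and stop)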

Shrink and swell operators are fundamental morphological operations used in robotic vision and image processing to
manipulate and analyze object shapes in binary images. These operators are based on set theory and work by applying
structuring elements to an image.
• Shrink (Erosion): The shrink operator, or erosion, reduces the size of bright regions (foreground) in a binary image. It erodes the boundaries of objects, making them smaller by removing pixels at the object's boundary. This operation is useful for eliminating small noise or gaps between objects.
• Swell (Dilation): The swell operator, or dilation, increases the size of bright regions by adding pixels to the boundaries of objects. It can be used to connect nearby objects, fill holes, or expand object regions for better visibility.
Both operators are applied iteratively to achieve desired effects. When used together, shrink and swell can refine object
detection, enhance feature extraction, and improve the robustness of robotic vision systems in tasks like object recognition,
segmentation, and path planning.
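A minimal sketch of shrink (erosion) and swell (dilation) on a small binary image, using SciPy's binary morphology routines and a 3x3 structuring element; the image values are made up for illustration.

import numpy as np
from scipy import ndimage

# A small binary image: 1 = foreground (bright), 0 = background
image = np.array([
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
], dtype=bool)

structure = np.ones((3, 3), dtype=bool)  # 3x3 structuring element

shrunk = ndimage.binary_erosion(image, structure=structure)    # shrink
swollen = ndimage.binary_dilation(image, structure=structure)  # swell

# Shrink followed by swell (an "opening") removes small noise while roughly
# preserving the size of larger objects
opened = ndimage.binary_dilation(
    ndimage.binary_erosion(image, structure=structure), structure=structure)

print(shrunk.astype(int))
print(swollen.astype(int))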
Implementation of Industry 4.0 and Industrial Robots in Manufacturing Processes
The fourth industrial revolution, known as Industry 4.0, is transforming manufacturing processes through the integration of
advanced technologies. Emerging in 2011, Industry 4.0 encompasses robotics, automation, the Internet of Things (IoT), 3D
printing, smart sensors, and Radio Frequency Identification (RFID). These technologies enable intelligent automation, self-
configuration, self-diagnosis, and problem-solving capabilities, revolutionizing traditional manufacturing.
Historically, industrial revolutions have progressed from steam engines to electricity, IT systems, and now to cyber-physical
systems (CPS) and cloud computing. Industry 4.0 aims to create a flexible, interconnected production environment where
virtual and physical systems collaborate seamlessly. Key principles include interoperability, decentralization, virtualization, real-
time capability, service orientation, and modularity. These principles facilitate the integration of CPS, people, and production
processes, enabling real-time monitoring and decision-making.
The transition to Industry 4.0 involves significant challenges, such as changing business paradigms, legal issues, resource
planning, security concerns, and standardization. Successful implementation requires collaboration among all stakeholders,
including manufacturers, managers, designers, and end-users. The document emphasizes the importance of innovation,
efficiency, agility, and risk management in implementing Industry 4.0.
Industry 4.0 transforms traditional manufacturing into smart factories, enhancing productivity, quality, and competitiveness. It
underscores the need for a multidisciplinary approach and continuous education to adapt to the rapidly evolving technological
landscape. The implementation strategy of Industry 4.0 is to move industrial production towards complete intelligent automation, which means introducing self-automation, self-configuration, self-diagnosis, problem-solving, and intelligent decision-making.
In conclusion, Industry 4.0 is already present in all industrial branches, from production to the sale of finished products. By
introducing technologies such as Cloud Computing, Robotics & Automation, Intelligent Sensors, 3D printers, and RFID, we
witness changes in processes and technologies as well as the organization of manufacturing and sales. The complete
application of Industry 4.0 leads us towards smart factories, allowing companies to remain competitive on the global market.
Smart factories are not isolated from other social changes such as the development of businesses, science, and education,
which require changes in all segments because world knowledge is doubling every one or two years. The success or failure to
implement Industry 4.0 lies in the hands of all participants in the chain, from the manufacturer to the end user.
