Robotics and Automation, Unit V: AI and Other Research Trends in Robotics
Unsupervised Learning
This method of ML finds application in areas where the data has no
historical labels. The system is not given the "right answer"; instead,
the algorithm must work out what is being shown. The main aim is to
analyze the data and identify patterns and structure within the available
data set. Transactional data serves as a good source of data for
unsupervised learning.
For instance, this type of learning can identify customer segments with
similar attributes and then let the business treat them similarly in
marketing campaigns. Similarly, it can identify the attributes that
differentiate customer segments from one another. Either way, it is about
identifying structure in the available data set. These algorithms can
also identify outliers in the data.
Some of the widely used techniques of unsupervised learning are:
• k-means clustering
• self-organizing maps
• singular value decomposition
• nearest-neighbor mapping
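As a concrete illustration, the first technique in the list, k-means clustering, can be sketched in a few lines of plain Python. The two-segment "customer" data below is made up for the example:

```python
# A minimal k-means sketch (hypothetical two-segment "customer" data):
# assign each point to its nearest centroid, move each centroid to the
# mean of its assigned points, and repeat.
def kmeans(points, k, iters=20):
    # naive deterministic init: spread starting centroids across the data
    centroids = [points[i * (len(points) // k)] for i in range(k)]
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties
                centroids[i] = tuple(sum(x) / len(members)
                                     for x in zip(*members))
    return centroids, clusters

data = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),    # low spenders
        (8.0, 8.2), (7.9, 8.1), (8.2, 7.8)]    # high spenders
centroids, clusters = kmeans(data, k=2)
```

With no labels provided, the algorithm still recovers the two customer segments purely from the structure of the data.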
Semi-supervised Learning
This kind of learning is applied to the same kinds of scenarios as
supervised learning, but it uses both labeled and unlabeled data for
training. Ideally, a small set of labeled data is combined with a large
volume of unlabeled data, since unlabeled data takes less time, money and
effort to acquire. This type of machine learning is often used with
methods such as regression, classification and prediction. Companies that
find it challenging to meet the high costs of producing labeled training
data often opt for semi-supervised learning.
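One common semi-supervised recipe is self-training, sketched below with made-up data and a deliberately simple nearest-centroid classifier: fit on the small labeled seed, pseudo-label the larger unlabeled pool, then refit on everything.

```python
# Self-training sketch (hypothetical data): a tiny labeled seed plus a
# larger pool of cheap unlabeled points.
def centroid_fit(X, y):
    # nearest-centroid classifier: one mean vector per class
    classes = sorted(set(y))
    return {c: tuple(sum(v) / len(v) for v in
                     zip(*[x for x, t in zip(X, y) if t == c]))
            for c in classes}

def predict(model, x):
    return min(model, key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(x, model[c])))

labeled_X = [(0.0, 0.0), (9.0, 9.0)]              # small labeled seed
labeled_y = ["low", "high"]
unlabeled = [(1.0, 0.5), (0.5, 1.0), (8.5, 9.2),
             (9.1, 8.4), (0.2, 0.8), (8.8, 8.9)]  # unlabeled pool

model = centroid_fit(labeled_X, labeled_y)
pseudo = [predict(model, x) for x in unlabeled]   # pseudo-labels
model = centroid_fit(labeled_X + unlabeled, labeled_y + pseudo)
```

The refit model benefits from all eight points even though only two were ever labeled by hand.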
Reinforcement Learning
This is mainly used in navigation, robotics and gaming. Actions that yield
the best rewards are identified by algorithms that use trial and error
methods. There are three major components in reinforcement learning,
namely, the agent, the actions and the environment. The agent in this case
is the decision maker, the actions are what an agent does, and the
environment is anything that an agent interacts with. The main aim in this
kind of learning is to select the actions that maximize the reward, within a
specified time. By following a good policy, the agent can achieve the
goal faster.
Hence, the primary idea of reinforcement learning is to identify the best
policy or the method that helps businesses in achieving the goals faster.
While humans can create a few good models in a week, machine learning
is capable of developing thousands of such models in a week.
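The agent/actions/environment loop described above can be sketched with tabular Q-learning on a hypothetical one-dimensional corridor; all names and parameter values below are illustrative:

```python
import random

# Tabular Q-learning on a toy 1-D corridor: states 0..4, reward 1.0 only
# for reaching the right end. The agent (decision maker) tries actions in
# the environment and learns which ones yield reward.
N_STATES = 5
ACTIONS = (0, 1)                 # 0 = move left, 1 = move right
alpha, gamma, epsilon = 0.5, 0.9, 0.2
Q = [[0.0, 0.0] for _ in range(N_STATES)]
rng = random.Random(0)

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy trial and error
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2, r = step(s, a)
        # move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# the learned policy: best action per state
policy = [max(ACTIONS, key=lambda act: Q[s][act]) for s in range(N_STATES)]
```

After training, the greedy policy moves right from every non-terminal state, i.e. the agent has identified the actions that maximize reward.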
What are the Key Factors that Differentiate Data Mining, Deep
Learning and Machine Learning?
While all of these methodologies have one single goal of deriving
insights, patterns and trends to make more informed decisions, all of them
have different approaches to the same. Let's look at some of them below.
Data Mining
This process is a superset of numerous methods, which might involve
machine learning and traditional statistical methods, to derive useful
insights from the available data. It is primarily used to discover
patterns in a data set that were not previously known. This approach
includes machine learning, statistical algorithms, time series analysis, text
analytics, and other domains of analytics. Besides, data mining also
involves the study and practice of data manipulation and data storage.
Machine Learning
Like various statistical models, machine learning aims to determine and
understand the structure and patterns hidden in data. Statistical models
apply theoretical distributions to data sets to gain that understanding,
and every statistical model is backed by mathematically proven theory.
Machine learning, however, relies heavily on the ability of computers to
dig into the available data and uncover structure, even in the absence of
a theory about what that structure might look like.
Machine learning models are tested using the validation error on new
data sets, rather than by a theoretical test that confirms a null
hypothesis. Because machine learning is iterative, learning from the
data, the learning process can be automated easily, and the data is
analyzed until a clear pattern is identified.
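The validation-error testing described above can be illustrated with a minimal holdout split. The 1-D data and the single-threshold "model" below are both hypothetical:

```python
import random

# Minimal holdout validation on made-up 1-D data: class 1 tends to have
# larger values than class 0. The "model" is a single threshold placed
# halfway between the two class means learned from the training split.
rng = random.Random(42)
data = ([(rng.gauss(0.0, 1.0), 0) for _ in range(100)] +
        [(rng.gauss(3.0, 1.0), 1) for _ in range(100)])
rng.shuffle(data)

train, valid = data[:150], data[150:]    # holdout split

mean0 = (sum(x for x, y in train if y == 0) /
         sum(1 for x, y in train if y == 0))
mean1 = (sum(x for x, y in train if y == 1) /
         sum(1 for x, y in train if y == 1))
threshold = (mean0 + mean1) / 2.0

# validation error: fraction of unseen points the threshold misclassifies
errors = sum((x > threshold) != bool(y) for x, y in valid)
validation_error = errors / len(valid)
```

The key point is that `validation_error` is measured on the 50 points the model never saw, not on a theoretical test of a null hypothesis.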
Artificial Intelligence and Expert Systems
Characteristics expected of an expert system include being:
• Highly responsive
• Reliable
• Understandable
• High performance
Today a doctor would turn to the literature on the subject and examine
similar cases. Another option, however, is to enter the patient's test
results and medical history into a computer program and let it compare
them against millions of similar pathology records.
There are types and subtypes of cancer that are very rare and difficult
to distinguish, and different subtypes of cancer may require dramatically
different treatment plans.
Such a model can help doctors make a more accurate diagnosis based on
more comprehensive evidence. Imagine the impact such a system could have
if expanded across other institutions.
In fact, such systems were built long ago and were among the first
successful applications of Artificial Intelligence. But because AI and
NLP were still immature, these Expert Systems did not live up to
business-world expectations, and the term itself dropped out of the IT
lexicon.
MAJOR COMPONENTS
1. Knowledge base
The power of an expert system stems from the specific knowledge it
stores about a narrow domain. The knowledge base of an ES contains both
factual and heuristic knowledge.
2. Inference engine – the reasoning mechanism
Inference engine provides a methodology for reasoning about information
in the knowledge base. Its goal is to come up with a recommendation, and
to do so it combines the facts of a specific case (input data) with the
knowledge contained in the knowledge base.
Inference can be performed using semantic networks, production rules,
and logic statements.
There are two chaining strategies. Forward chaining is data-driven: it
reasons from known facts toward conclusions and is typically applied to
make predictions. Backward chaining is goal-driven: it works back from a
hypothesis to the facts that support it, and is typically used to find
out why something has happened.
3. User interface – the hardware and software that provides interaction
between program and users.
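The forward-chaining strategy of the inference engine can be sketched in a few lines; the rules and facts below are illustrative, not drawn from a real medical system:

```python
# Toy forward-chaining engine: fire any rule whose premises are all known
# facts, add its conclusion as a new fact, and repeat until nothing new
# can be derived. Rules pair a set of premises with one conclusion.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk_patient"}, "recommend_lab_test"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)      # derived a new fact
                changed = True
    return facts

derived = forward_chain({"fever", "cough", "high_risk_patient"}, rules)
```

Starting from the input facts, the engine chains through the first rule to derive `flu_suspected` and then through the second to reach the recommendation, which is exactly the fact-to-conclusion direction described above.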
Telerobotics
Telerobotics is the area of robotics concerned with the control of semi-
autonomous robots from a distance, chiefly using wireless networks (such
as Wi-Fi, Bluetooth, and the Deep Space Network) or tethered connections.
It combines two major subfields, teleoperation and telepresence.
Virtual Reality
Paramount for the sensation of immersion in virtual reality are a high
frame rate (at least 95 fps) and low latency.
Modern virtual reality headset displays are based on technology
developed for smartphones including: gyroscopes and motion sensors for
tracking head, hand, and body positions; small HD screens for
stereoscopic displays; and small, lightweight and fast computer
processors. These components made VR relatively affordable for
independent developers and led to the 2012 Oculus Rift Kickstarter,
which offered the first independently developed VR headset.[44]
Independent production of VR images and video has increased with the
development of omnidirectional cameras, also known as 360-degree
cameras or VR cameras, which can record interactive 360-degree
photography, although at low resolutions or in highly compressed
formats for online streaming of 360-degree video.[51] In contrast,
photogrammetry is increasingly used to combine several high-resolution
photographs for the creation of detailed 3D objects and environments in
VR applications.[52][53]
To create a feeling of immersion, special output devices are needed to
display virtual worlds. Well-known formats include head-mounted
displays or the CAVE. In order to convey a spatial impression, two
images are generated and displayed from different perspectives (stereo
projection). There are different technologies available to bring the
respective image to the right eye. A distinction is made between active
(e.g. shutter glasses) and passive technologies (e.g. polarizing filters or
Infitec).
Special input devices are required for interaction with the virtual world.
These include the 3D mouse, the wired glove, motion controllers, and
optical tracking sensors. Controllers typically use optical tracking
systems (primarily infrared cameras) for location and navigation, so that
the user can move freely without wiring. Some input devices provide
force feedback to the hands or other parts of the body, so that the user
can orient within the three-dimensional world through haptics and carry
out realistic simulations. Additional haptic feedback can be obtained
from omnidirectional treadmills (with which walking in virtual space is
controlled by real walking movements) and from vibration gloves and
suits.
Virtual reality cameras can be used to create VR photography using 360-
degree panorama videos. 360-degree camera shots can be mixed with
virtual elements to merge reality and fiction through special effects.
VR cameras are available in various formats, with varying numbers of
lenses installed in the camera.
Applications
[Image: Apollo 11 astronaut Buzz Aldrin previewing the Destination: Mars
VR experience at the Kennedy Space Center Visitor Complex in 2016]
Virtual reality is most commonly used in entertainment applications such
as video gaming and 3D cinema. Consumer virtual reality headsets were
first released by video game companies in the early-mid 1990s.
Beginning in the 2010s, next-generation commercial tethered headsets
were released by Oculus (Rift), HTC (Vive) and Sony (PlayStation VR),
setting off a new wave of application development. 3D cinema has been
used for sporting events, pornography, fine art, music videos and short
films. Since 2015, roller coasters and theme parks have incorporated
virtual reality to match visual effects with haptic feedback.
In social sciences and psychology, virtual reality offers a cost-
effective tool to study and replicate interactions in a controlled
environment. It can be used as a form of therapeutic intervention. For
instance, there is the case of virtual reality exposure therapy (VRET),
a form of exposure therapy for treating anxiety disorders such as post-
traumatic stress disorder (PTSD) and phobias.
In medicine, simulated VR surgical environments under the supervision
of experts can provide effective and repeatable training at a low cost,
allowing trainees to recognize and amend errors as they occur. Virtual
reality has been used in physical rehabilitation since the 2000s.
Despite numerous studies, good-quality evidence of its efficacy for
treating Parkinson's disease, compared to other rehabilitation methods
that do not require sophisticated and expensive equipment, is lacking. A
2018 review examined the effectiveness of mirror therapy delivered by
virtual reality. Another study showed the potential for VR to promote
mimicry and revealed differences between neurotypical and autism-
spectrum-disorder individuals in their responses to a two-dimensional
avatar.
Nanorobotics
Nanorobotics is an emerging technology field creating machines or
robots whose components are at or near the scale of a nanometer (10−9
meters). More specifically, nanorobotics (as opposed to microrobotics)
refers to the nanotechnology engineering discipline of designing and
building nanorobots: devices ranging in size from 0.1–10 micrometres
and constructed of nanoscale or molecular components. The terms nanobot,
nanoid, nanite, nanomachine and nanomite have also been used to describe
such devices currently under research and development.
Approaches under research include biochips, surface-bound systems,
virus-based nanorobots, and nanomedicine applications.
Unmanned vehicle
An unmanned vehicle or uncrewed vehicle is a vehicle without a person
on board. Uncrewed vehicles can either be remote controlled or remote
guided vehicles, or they can be autonomous vehicles which are capable
of sensing their environment and navigating on their own.
History
A working remote controlled car was reported in the October 1921 issue
of RCA's World Wide Wireless magazine. The car was unmanned and
controlled wirelessly via radio; it was thought the technology could
someday be adapted to tanks.[1] In the 1930s, the USSR developed
Teletanks, a machine gun-armed tank remotely controlled by radio from
another tank. These were used in the Winter War (1939–1940) against
Finland and at the start of the Eastern Front after Germany invaded the
USSR in 1941. During World War II, the British developed a radio-
controlled version of their Matilda II infantry tank in 1941. Known as
"Black Prince", it would have been used for drawing the fire of concealed
anti-tank guns, or for demolition missions. Due to the costs of converting
the transmission system of the tank to Wilson type gearboxes, an order
for 60 tanks was cancelled.[2]
From 1942, the Germans used the Goliath tracked mine for remote
demolition work. The Goliath was a small tracked vehicle carrying 60 kg
of explosive charge directed through a control cable. Their inspiration
was a miniature French tracked vehicle found after France was defeated
in 1940. The combination of cost, low speed, reliance on a cable for
control, and poor protection against weapons meant it was not considered
a success.
The first major mobile robot development effort, named Shakey, was
created during the 1960s as a research study for the Defense Advanced
Research Projects Agency (DARPA). Shakey was a wheeled platform
that had a TV camera, sensors, and a computer to help guide its
navigational tasks of picking up wooden blocks and placing them in
certain areas based on commands. DARPA subsequently developed a
series of autonomous and semi-autonomous ground robots, often in
conjunction with the U.S. Army. As part of the Strategic Computing
Initiative, DARPA demonstrated the Autonomous Land Vehicle, the first
UGV that could navigate completely autonomously on and off roads at
useful speeds.
Design
The design of a UGV typically covers the platform, the sensors, the
control systems, and the degree of autonomy.
Space Applications
NASA's Mars Exploration Rover project included two UGVs, Spirit and
Opportunity, which performed far beyond their original design
parameters. This is attributed to redundant systems, careful handling,
and long-term interface decision making.[4] Opportunity and its twin
Spirit, six-wheeled, solar-powered ground vehicles, were launched in
mid-2003 and landed on opposite sides of Mars in January 2004. The
Spirit rover operated nominally until it became trapped in deep sand in
April 2009, lasting more than 20 times longer than expected.
Opportunity, by comparison, remained operational for more than 12 years
beyond its intended lifespan of three months. Curiosity landed on Mars
in August 2012, and its original two-year mission has since been
extended indefinitely.
Manufacturing
In the manufacturing environment, UGVs are used for transporting
materials.[21] They are often automated and referred to as AGVs.
Aerospace companies use these vehicles for precision positioning and
for transporting heavy, bulky pieces between manufacturing stations,
which is less time-consuming than using large cranes and keeps people
out of dangerous areas.
Cognitive Robotics
Core issues
While traditional cognitive modeling approaches have assumed symbolic
coding schemes as a means for depicting the world, translating the world
into these kinds of symbolic representations has proven to be problematic
if not untenable. Perception and action and the notion of symbolic
representation are therefore core issues to be addressed in cognitive
robotics.
Starting point
Cognitive robotics views animal cognition as a starting point for the
development of robotic information processing, as opposed to more
traditional Artificial Intelligence techniques. Target robotic cognitive
capabilities include perception processing, attention
allocation, anticipation, planning, complex motor coordination, reasoning
about other agents and perhaps even about their own mental states.
Robotic cognition embodies the behavior of intelligent agents in the
physical world (or a virtual world, in the case of simulated cognitive
robotics). Ultimately the robot must be able to act in the real world.
Learning techniques
Motor Babbling
A preliminary robot learning technique called motor babbling involves
correlating pseudo-random complex motor movements by the robot with
resulting visual and/or auditory feedback such that the robot may begin
to expect a pattern of sensory feedback given a pattern of motor output.
Desired sensory feedback may then be used to inform a motor control
signal. This is thought to be analogous to how a baby learns to reach for
objects or learns to produce speech sounds. For simpler robot systems,
where for instance inverse kinematics may feasibly be used to transform
anticipated feedback (desired motor result) into motor output, this step
may be skipped.
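The babbling-then-inversion idea above can be sketched as follows. The one-joint "arm" and its sensor mapping are hypothetical; the robot treats that mapping as unknown and discovers it by babbling:

```python
import random

# Hypothetical 1-joint arm: a motor command (joint angle) produces a hand
# x-position as sensory feedback. The mapping is unknown to the robot.
def environment(command):
    return 2.0 * command + 0.5          # the "world" the robot senses

rng = random.Random(1)
experience = []                          # (motor command, observed feedback)
for _ in range(50):
    cmd = rng.uniform(-1.0, 1.0)        # pseudo-random motor babble
    experience.append((cmd, environment(cmd)))

def command_for(desired_feedback):
    # inverse model by lookup: reuse the babbled command whose observed
    # feedback was closest to the desired feedback
    return min(experience, key=lambda e: abs(e[1] - desired_feedback))[0]

cmd = command_for(1.5)                   # want the hand at x = 1.5
```

Desired sensory feedback now informs the motor signal: the robot picks the command whose remembered outcome best matches the goal, rather than solving inverse kinematics analytically.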
Imitation
Once a robot can coordinate its motors to produce a desired result, the
technique of learning by imitation may be used. The robot monitors the
performance of another agent and then the robot tries to imitate that
agent. It is often a challenge to transform imitation information from a
complex scene into a desired motor result for the robot. Note that
imitation is a high-level form of cognitive behavior and imitation is not
necessarily required in a basic model of embodied animal cognition.
Knowledge acquisition
A more complex learning approach is "autonomous knowledge
acquisition": the robot is left to explore the environment on its own. A
system of goals and beliefs is typically assumed.
A somewhat more directed mode of exploration can be achieved by
"curiosity" algorithms, such as Intelligent Adaptive Curiosity[1][2] or
Category-Based Intrinsic Motivation.[3] These algorithms generally
involve breaking sensory input into a finite number of categories and
assigning some sort of prediction system (such as an Artificial Neural
Network) to each. The prediction system keeps track of the error in its
predictions over time. Reduction in prediction error is considered
learning. The robot then preferentially explores categories in which it is
learning (or reducing prediction error) the fastest.
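The error-tracking loop described above can be sketched as follows; the two sensory categories, the running-mean predictor, and all parameters are illustrative stand-ins for the richer predictors these algorithms actually use:

```python
import random

# Curiosity-driven exploration sketch: each sensory category gets a simple
# predictor (a running mean), and the robot prefers the category whose
# prediction error has dropped fastest recently (highest learning progress).
class CategoryPredictor:
    def __init__(self):
        self.mean, self.n, self.errors = 0.0, 0, []

    def observe(self, value):
        self.errors.append(abs(value - self.mean))   # prediction error
        self.n += 1
        self.mean += (value - self.mean) / self.n    # update the predictor

    def progress(self, window=5):
        # learning progress = recent drop in average prediction error
        if len(self.errors) < 2 * window:
            return float("inf")                      # unexplored: interesting
        old = sum(self.errors[-2 * window:-window]) / window
        new = sum(self.errors[-window:]) / window
        return old - new

rng = random.Random(0)
predictors = {"learnable": CategoryPredictor(), "noise": CategoryPredictor()}
for step in range(200):
    # preferentially explore the category with the highest learning progress
    cat = max(predictors, key=lambda c: predictors[c].progress())
    value = 5.0 if cat == "learnable" else rng.uniform(-10.0, 10.0)
    predictors[cat].observe(value)
```

The "learnable" category (a constant signal) is quickly mastered, its progress falls to zero, and attention is free to move on, while pure noise never yields sustained progress.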
Other architectures
Some researchers in cognitive robotics have tried using cognitive
architectures such as ACT-R and Soar as a basis for their cognitive
robotics programs. These highly modular symbol-processing
architectures have been used to simulate operator performance and
human performance when modeling simplistic and symbolized laboratory
data. The idea is to extend these architectures to handle real-world
sensory input as that input continuously unfolds through time. What is
needed is a way to somehow translate the world into a set of symbols and
their relationships.
Questions
Some of the fundamental questions to still be answered in cognitive
robotics are:
• How much human programming should or can be involved to support
the learning processes?
• How can one quantify progress? A commonly adopted approach is
reward and punishment. But what kind of reward, and what kind of
punishment? In humans, when teaching a child for example, the
reward would be candy or some encouragement, and the punishment
can take many forms. But what is an effective way with robots?
Evolutionary robotics
Evolutionary robotics (ER) is a methodology that uses evolutionary
computation to develop controllers and/or hardware for autonomous
robots. Algorithms in ER frequently operate on populations of candidate
controllers, initially selected from some distribution. This population is
then repeatedly modified according to a fitness function. In the case
of genetic algorithms (or "GAs"), a common method in evolutionary
computation, the population of candidate controllers is repeatedly grown
according to crossover, mutation and other GA operators and then culled
according to the fitness function. The candidate controllers used in ER
applications may be drawn from some subset of the set of artificial neural
networks, although some applications (including SAMUEL, developed at
the Naval Center for Applied Research in Artificial Intelligence) use
collections of "IF THEN ELSE" rules as the constituent parts of an
individual controller. It is theoretically possible to use any set of
symbolic formulations of a control law (sometimes called a policy in
the machine learning community) as the space of possible candidate
controllers. Artificial neural networks can also be used for robot
learning outside the context of evolutionary robotics. In particular, other
forms of reinforcement learning can be used for learning robot
controllers.
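The grow-and-cull GA loop described above can be sketched as follows. The three-weight "controller" and the fitness function (distance to a hypothetical ideal weight vector, standing in for a real robot trial) are illustrative:

```python
import random

rng = random.Random(0)
TARGET = [0.5, -0.3, 0.8]   # hypothetical "ideal" controller weights

def fitness(genome):
    # stand-in for a robot trial: closer to the ideal weights (unknown
    # to the GA itself) means higher fitness
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def crossover(a, b):
    cut = rng.randrange(1, len(a))      # one-point crossover
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.3, scale=0.1):
    return [g + rng.gauss(0.0, scale) if rng.random() < rate else g
            for g in genome]

# initial population of random candidate controllers
pop = [[rng.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(30)]
for generation in range(60):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                # cull according to fitness
    children = [mutate(crossover(rng.choice(survivors),
                                 rng.choice(survivors)))
                for _ in range(20)]     # grow by crossover and mutation
    pop = survivors + children

best = max(pop, key=fitness)
```

In a real ER application each fitness evaluation is a full robot trial (in simulation or hardware), which is why the large number of evaluations discussed below is so costly.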
Developmental robotics is related to, but differs from, evolutionary
robotics. ER uses populations of robots that evolve over time, whereas
DevRob is interested in how the organization of a single robot's control
system develops through experience, over time.
Objectives
Evolutionary robotics is done with many different objectives, often at the
same time. These include creating useful controllers for real-world robot
tasks, exploring the intricacies of evolutionary theory (such as
the Baldwin effect), reproducing psychological phenomena, and finding
out about biological neural networks by studying artificial ones. Creating
controllers via artificial evolution requires a large number of evaluations
of a large population. This is very time consuming, which is one of the
reasons why controller evolution is usually done in software. Also, initial
random controllers may exhibit potentially harmful behaviour, such as
repeatedly crashing into a wall, which may damage the robot.
Transferring controllers evolved in simulation to physical robots is
difficult and a major challenge in using the ER approach, because
evolution is free to exploit every possibility for high fitness,
including any inaccuracies of the simulation. This need for a large
number of evaluations, requiring fast yet accurate computer simulations,
is one of the limiting factors of the ER approach.
In rare cases, evolutionary computation may be used to design the
physical structure of the robot, in addition to the controller. One of the
most notable examples of this was Karl Sims' demo for Thinking
Machines Corporation.
Motivation
Although there are no known humanoid species outside the genus Homo,
the theory of convergent evolution speculates that different species may
evolve similar traits, and in the case of a humanoid these traits may
include intelligence and bipedalism and other humanoid skeletal changes,
as a result of similar evolutionary pressures. The American
psychologist and dinosaur-intelligence theorist Harry Jerison suggested
the possibility of sapient dinosaurs. In a 1978 presentation to the
American Psychological Association, he speculated that Dromiceiomimus
could have evolved into a highly intelligent species like human
beings.[2] In his book, Wonderful
Life, Stephen Jay Gould argues that if the tape of life were re-wound and
played back, life would have taken a very different course.[3] Simon
Conway Morris counters this argument, arguing that convergence is a
dominant force in evolution and that since the same environmental and
physical constraints act on all life, there is an "optimum" body plan that
life will inevitably evolve toward, with evolution bound to stumble upon
intelligence, a trait of primates, crows, and dolphins, at some point.[4]