
ROBOTICS AND AUTOMATION

UNIT V AI AND OTHER RESEARCH TRENDS IN ROBOTICS

Machine learning (ML) equips computers to learn and interpret data without being explicitly programmed to do so. Here, as the "computers", also referred to as the "models", are exposed to sets of new data, they adapt independently and learn from earlier computations to interpret available data and identify hidden patterns. This involves data analysis and automation of analytical model-building using numerous ML algorithms. ML enables computers and computing machines to search for and identify hidden insights when exposed to new data sets, without being programmed where to look.
Although this technology is not new, it is now gaining fresh momentum. The factors responsible for the resurging interest in ML are powerful and affordable computational processing, continuously growing volumes of huge data sets, and affordable data storage options. Today, companies can make informed decisions by using ML algorithms to develop analytical models, which uncover connections, trends and patterns with minimal or no human intervention.

Evolution of Machine Learning


Today, machine learning is different from what it used to be in the past, due to the emergence of advanced computing technologies. Initially, it gained momentum due to pattern recognition and the idea that computers could learn without being programmed to execute specific tasks. Many researchers who were interested in Artificial Intelligence (AI) investigated this area further to find out whether computers could really learn from data or not.
The focus here is on iterative learning. Machines begin to adapt to the new data that they are exposed to over time. Based on the patterns and computations previously created, machines learn to repeat decisions made in the past in similar situations. This ability of machines to learn from existing patterns is now gaining huge momentum.
Today, people are sitting up and taking notice of the fact that machines are now able to apply complicated mathematical calculations to areas such as big data at a much faster rate. Consider the Google self-driving car, for instance, which is built largely around machine learning. Another important use of machine learning can be found in the regular recommendations rolled out by companies like Netflix and Amazon - an example of machine learning in everyday life. ML can also be combined with linguistic rule creation, as Twitter does, to let you know what customers say about you. And not to forget, machine learning is increasingly being used to detect fraud in various industry sectors.

What Should You Know About Machine Learning?


Gone are the days when programmers would tell a machine how to solve the problem at hand. We are in the era of machine learning, where machines are left to solve problems on their own by identifying the patterns in each data set. Analyzing hidden trends and patterns makes it easy to predict future problems and prevent them from occurring.
A machine learning algorithm is usually given a certain set of data and then uses the patterns hidden in that data to answer further questions. For example, you might show a computer a series of photographs, some labeled "this is a horse" and some labeled "this is not a horse." After this exercise, if you show more photographs to the same computer, it will set out to identify which of those photographs are of a horse and which are not. Every correct and incorrect guess the computer makes is added to its memory, which makes it smarter in the long run and enriches its learning over time.
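As a minimal sketch of this label-and-learn loop in Python (assuming each photograph has already been reduced to a few numeric features; the feature names, values, and the choice of scikit-learn's k-nearest-neighbors classifier are illustrative assumptions, not part of the original example):

```python
# A minimal sketch of the "horse / not a horse" example. Each photograph
# is assumed to be pre-processed into a small numeric feature vector;
# the features and values below are made up for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Labeled training photos: [leg_count, mane_length_cm, height_cm]
photos = [
    [4, 60, 160],   # horse
    [4, 55, 150],   # horse
    [4, 0, 50],     # dog: not a horse
    [2, 0, 170],    # person: not a horse
]
labels = ["horse", "horse", "not a horse", "not a horse"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(photos, labels)

# A new, unlabeled photo: the model guesses based on past examples.
print(model.predict([[4, 58, 155]]))  # -> ['horse']
```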

How Does Machine Learning Work?


To get the maximum value from big data, businesses must know exactly
how to pair the right algorithm with a particular tool or process and build
machine learning models based on iterative learning processes. Some of
the key machine learning algorithms are -
• Random forests
• Neural networks
• Sequence and association discovery
• Decision trees
• Nearest-neighbor mapping
• Support vector machines
• Gradient boosting and bagging
• Self-organizing maps
• Multivariate adaptive regression splines
• Local search optimization (e.g., genetic algorithms)
• Principal component analysis
As mentioned above, the secret to successfully harnessing the applications of ML lies not just in knowing the algorithms, but in pairing them accurately with the right tools and processes, which include -
• Data exploration followed by visualization of model results
• Overall data quality and management
• Easy model deployment to quickly get reliable and repeatable results
• Graphical user interfaces for creating process flows and building models
• Comparing various machine learning models and identifying the best (a brief sketch follows this list)
• Identifying the best performers through automated ensemble model evaluation
• An automated data-to-decision process
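As promised above, here is a brief sketch of comparing candidate models with cross-validation in scikit-learn. The dataset and the two candidate models are illustrative choices, not prescribed by the text:

```python
# A hedged sketch of "comparing various machine learning models and
# identifying the best" using 5-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
candidates = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)   # accuracy per fold
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```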

Why is Machine Learning So Important in Today's Business Scenario?
Most of the industries dealing with huge amounts of data have now
recognized the value of machine learning. By gleaning hidden insights
from this data, businesses can work more efficiently and can also gain a
competitive edge. Besides, affordable and easy computational processing
and cost-effective data storage options have made it feasible to develop
models that quickly and accurately analyze huge chunks of complex data.
Apart from enabling enterprises to identify trends and patterns from diverse data sets, ML also enables businesses to automate analysis that was traditionally done by humans. Using ML, organizations can deliver personalized services and differentiated products that precisely cater to the varying needs of customers. Additionally, ML helps companies identify opportunities that can be profitable in the long run.
If you are planning to develop effective machine learning systems for
augmenting your business, then here is what it takes -
• Superior data preparation capabilities

• Knowledge of basic and advanced algorithms


• Scalability
• Automation and iterative processes
• Knowledge of ensemble modeling

Applications of Machine Learning


The value of machine learning technology has been recognized by
companies across several industries that deal with huge volumes of data.
By leveraging insights obtained from this data, companies are able to work more efficiently, control costs, and gain an edge over their competitors. This is how some sectors / domains are implementing machine learning -
• Financial Services

Companies in the financial sector are able to identify key insights in financial data and prevent occurrences of financial fraud with the help of machine learning technology. The technology is also used to identify opportunities for investments and trade. Cyber surveillance helps in identifying those individuals or institutions that are prone to financial risk, so that necessary action can be taken in time to prevent fraud.
• Marketing and Sales
Companies are using machine learning technology to analyze the
purchase history of their customers and make personalized product
recommendations for their next purchase. This ability to capture,
analyze, and use customer data to provide a personalized shopping
experience is the future of sales and marketing.
• Government
Government agencies like utilities and public safety have a specific need for ML, as they have multiple data sources that can be mined for useful patterns and insights. For example, sensor data can be analyzed to identify ways to minimize costs and increase efficiency. Furthermore, ML can also be used to minimize identity theft and detect fraud.
• Healthcare
With the advent of wearable sensors and devices that use data to assess the health of a patient in real time, ML is becoming a fast-growing trend in healthcare. Sensors in wearables provide real-time patient information, such as overall health condition, heartbeat, blood pressure and other vital parameters. Doctors and medical experts can use this information to analyze the health condition of an individual, draw a pattern from the patient's history, and predict the occurrence of any ailments in the future. The technology also empowers medical experts to analyze data to identify trends that facilitate better diagnoses and treatment.
• Transportation
Based on the travel history and pattern of traveling across various
routes, machine learning can help transportation companies predict
potential problems that could arise on certain routes, and accordingly
advise their customers to opt for a different route. Transportation
firms and delivery organizations are increasingly using machine
learning technology to carry out data analysis and data modeling to
make informed decisions and help their customers make smart
decisions when they travel.
• Oil and Gas
This is perhaps the industry that needs the application of machine learning the most. From analyzing underground minerals and finding new energy sources to streamlining oil distribution, ML applications for this industry are vast and still expanding.

Popular Machine Learning Methods in Use Today


Although supervised and unsupervised learning are the two machine learning methods most widely adopted by businesses today, there are various other machine learning techniques. Following is an overview of some of the most widely used ML methods -
Supervised Learning
These algorithms are trained using labeled examples as input, where the desired outcome is already known. A piece of equipment, for instance, could have data points labeled "F" (failed) and "R" (runs).
The learning algorithm receives a set of inputs along with the corresponding correct outcomes. It then compares its actual output with the correct output and flags an error if there is any discrepancy. Using methods such as regression, classification, gradient boosting, and prediction, supervised learning uses the discovered patterns to predict the values of the label on additional unlabeled data. This method is commonly used in areas where historical data is used to predict events that are likely to occur in the future. For instance, it can anticipate when a credit card transaction is likely to be fraudulent or predict which insurance customers are likely to file claims.
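A minimal sketch of the "F" / "R" example in Python, assuming the equipment readings have already been collected as numeric features; the sensor names, values, and the choice of a decision tree are illustrative assumptions:

```python
# A supervised model learns from labeled sensor readings and predicts
# the "F" (failed) / "R" (runs) label for unseen equipment.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical [temperature_C, vibration_mm_s] readings with known outcomes
readings = [[90, 12.0], [85, 11.5], [60, 3.0], [55, 2.5]]
outcomes = ["F", "F", "R", "R"]   # F = failed, R = runs

model = DecisionTreeClassifier().fit(readings, outcomes)
print(model.predict([[88, 11.0]]))  # -> ['F'], flagged as likely to fail
```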

Unsupervised Learning
This method of ML finds its application in areas where data has no historical labels. Here, the system is not provided with the "right answer" and the algorithm must identify what is being shown. The main aim is to analyze the data and identify a pattern and structure within the available data set. Transactional data serves as a good source of data for unsupervised learning.
For instance, this type of learning can identify customer segments with similar attributes and then let the business treat them similarly in marketing campaigns. Similarly, it can identify attributes that differentiate customer segments from one another. Either way, it is about identifying similar structure in the available data set. Besides, these algorithms can also identify outliers in the available data sets; a brief clustering sketch follows the list below.
Some of the widely used techniques of unsupervised learning are -
• k-means clustering
• self-organizing maps
• singular value decomposition
• nearest-neighbor mapping
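Here is a brief sketch of that kind of customer segmentation using k-means from scikit-learn. The two features and their values are invented for illustration:

```python
# Unsupervised customer segmentation: no labels are given; the algorithm
# groups similar customers by itself.
from sklearn.cluster import KMeans

# Hypothetical [annual_spend, visits_per_month] per customer
customers = [[200, 1], [220, 2], [5000, 20], [4800, 18], [1500, 8]]
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)

# Cluster assignments can drive differentiated marketing campaigns.
print(kmeans.labels_)            # cluster index per customer
print(kmeans.cluster_centers_)   # the "typical" customer of each segment
```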
Semi-supervised Learning
This kind of learning is applied to the same kinds of scenarios as supervised learning. However, this technique uses both labeled and unlabeled data for training. Ideally, a small set of labeled data is used along with a large volume of unlabeled data, as it takes less time, money and effort to acquire unlabeled data. This type of machine learning is often used with methods such as regression, classification and prediction. Companies that find it challenging to meet the high costs associated with the labeled training process opt for semi-supervised learning.
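A small sketch of this idea using scikit-learn's SelfTrainingClassifier, where unlabeled examples are marked with -1; the data and the choice of base classifier are illustrative assumptions:

```python
# Semi-supervised learning: a few labeled points plus a cheap unlabeled
# majority. The model iteratively labels the unlabeled points itself.
import numpy as np
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X = np.array([[0.0], [0.2], [0.9], [1.0], [0.1], [0.8]])
# Only two labeled examples; -1 marks the unlabeled majority.
y = np.array([0, -1, -1, 1, -1, -1])

base = SVC(probability=True, gamma="auto")
model = SelfTrainingClassifier(base).fit(X, y)
print(model.predict([[0.05], [0.95]]))  # -> [0 1]
```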

Reinforcement Learning
This is mainly used in navigation, robotics and gaming. Actions that yield
the best rewards are identified by algorithms that use trial and error
methods. There are three major components in reinforcement learning,
namely, the agent, the actions and the environment. The agent in this case
is the decision maker, the actions are what an agent does, and the
environment is anything that an agent interacts with. The main aim in this
kind of learning is to select the actions that maximize the reward, within a
specified time. By following a good policy, the agent can achieve the
goal faster.
Hence, the primary idea of reinforcement learning is to identify the best policy, i.e., the method that helps the agent achieve its goal fastest. While humans can create a few good models in a week, machine learning can develop thousands of such models in the same time.
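As a concrete illustration of the agent / actions / environment loop, here is a toy tabular Q-learning sketch in Python. The five-cell corridor world, reward scheme, and hyperparameters are all made up for illustration and are not taken from the text:

```python
# Tabular Q-learning: the agent chooses actions, the environment returns
# rewards, and trial-and-error updates improve the policy over time.
import random

n_states, actions = 5, [-1, +1]          # a 5-cell corridor; move left/right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(200):
    s = 0                                # the agent starts at the left end
    while s != n_states - 1:             # the goal is the right end
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)     # environment transition
        r = 1.0 if s2 == n_states - 1 else 0.0    # reward only at the goal
        best_next = max(Q[(s2, act)] for act in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy should prefer moving right (+1) in every state.
print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)])
```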

What are the Key Factors that Differentiate Data Mining, Deep
Learning and Machine Learning?
While all of these methodologies share the single goal of deriving insights, patterns and trends to enable more informed decisions, each takes a different approach. Let's look at some of them below.

Data Mining
This process is a superset of numerous methods, which might involve
machine learning and traditional statistical methods, to derive useful
insights from the available data. It is primarily used to discover those
patterns in a data set, which were not known previously. This approach
includes machine learning, statistical algorithms, time series analysis, text
analytics, and other domains of analytics. Besides, data mining also
involves the study and practice of data manipulation and data storage.

Machine Learning
Like the various statistical models available, the main aim of machine learning is to determine and properly understand the structure and patterns hidden in data. Theoretical distributions are then applied to the data sets to gain a better understanding. Every statistical model is backed by mathematically proven theory. Machine learning, however, relies heavily on the ability of computers to dig deeper into the available data to uncover a structure, even in the absence of a theory about what that structure might look like.
Machine learning models are tested using a validation error on new data
sets, contrary to going through a theoretical test that confirms a null
hypothesis. As machine learning is iterative in nature, in terms of learning
from data, the learning process can be automated easily, and the data is
analyzed until a clear pattern is identified.

Artificial intelligence

Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.

The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in data and make better decisions in the future based on the examples that we provide. The primary aim is to allow computers to learn automatically, without human intervention or assistance, and adjust their actions accordingly.

Some machine learning methods


Machine learning algorithms are often categorized as supervised or
unsupervised.
• Supervised machine learning algorithms can apply what has been learned in the past to new data using labeled examples to predict future events.
Starting from the analysis of a known training dataset, the learning
algorithm produces an inferred function to make predictions about
the output values. The system is able to provide targets for any new
input after sufficient training. The learning algorithm can also
compare its output with the correct, intended output and find errors
in order to modify the model accordingly.
• In contrast, unsupervised machine learning algorithms are used
when the information used to train is neither classified nor labeled.
Unsupervised learning studies how systems can infer a function to
describe a hidden structure from unlabeled data. The system
doesn’t figure out the right output, but it explores the data and can
draw inferences from datasets to describe hidden structures from
unlabeled data.
• Semi-supervised machine learning algorithms fall somewhere in
between supervised and unsupervised learning, since they use both
labeled and unlabeled data for training – typically a small amount
of labeled data and a large amount of unlabeled data. The systems
that use this method are able to considerably improve learning
accuracy. Usually, semi-supervised learning is chosen when the acquired labeled data requires skilled and relevant resources in order to train it / learn from it, whereas acquiring unlabeled data generally doesn't require additional resources.
• Reinforcement machine learning algorithms are a learning method that interacts with its environment by producing actions and discovering errors or rewards. Trial-and-error search and delayed reward are the most relevant characteristics of reinforcement learning. This method allows machines and software agents to automatically determine the ideal behavior within a specific context in order to maximize performance. Simple reward feedback, known as the reinforcement signal, is required for the agent to learn which action is best.

Machine learning enables analysis of massive quantities of data. While it generally delivers faster, more accurate results in order to identify profitable opportunities or dangerous risks, it may also require additional time and resources to train properly. Combining machine learning with AI and cognitive technologies can make it even more effective at processing large volumes of information.
Expert system

What is an expert system?

In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if-then rules rather than through conventional procedural code.

Expert systems have knowledge specific to one problem domain, e.g., medicine, science, engineering, etc. The expert's knowledge is called a knowledge base, and it contains accumulated experience that has been loaded and tested in the system. Much like other artificial intelligence systems, an expert system's knowledge may be enhanced with add-ons to the knowledge base or additions to the rules. The more experience entered into the expert system, the more the system can improve its performance.

Characteristics of expert systems:

• Highly responsive
• Reliable
• Understandable
• High performance

Expert systems today:

Although public opinion differs on whether our jobs will be replaced by artificial intelligence, expert systems are the artificial intelligence that will come for analytical, white-collar jobs. Because expert systems are proficient in reasoning, classification, configuration, pattern matching, diagnosis, and planning, certain industries are set up for disruption. Financial services, healthcare, customer service, aviation, and written communication can all be carried out by expert systems.
The first expert system to be approved by the American Medical
Association was the Pathfinder system. Built at Stanford University in the 1980s, this decision-theoretic expert system was designed for hematopathology diagnosis. In short, Pathfinder is an expert system that seeks out and diagnoses lymph-node diseases. Pathfinder deals with over 60 diseases and can recognize over 100 symptoms. The latest version of Pathfinder outperforms its creators - the world's leading pathologists.

Expert systems in business:

As expected, expert systems are being developed and deployed worldwide in myriad applications, mainly because of their symbolic reasoning and explanation capabilities. A recently developed expert system is ROSS, the AI attorney. ROSS is supported by self-learning systems that use data mining, pattern recognition, deep learning, and natural language processing to mimic the way the human brain works.
It may not be time for expert systems in your enterprise yet, but it is an exciting development to watch.

THE RETURN OF EXPERT SYSTEMS

Today a doctor would turn to the literature on the subject and examine similar cases. But another option is entering the patient's test results and medical history into a computer program and letting it compare them to millions of similar pathology records.

There are types and subtypes of cancer that are very rare and difficult to
distinguish. And different subtypes of cancer may require dramatically
different treatment plans.

To address this problem, a team from MIT's Computer Science and Artificial Intelligence Laboratory introduced a model that aims to automatically distinguish the type of lymphoma - a group of blood cancers.
The model uses many techniques, among them Natural Language Processing and Machine Learning.

The framework analyses pathology reports, which provide a comprehensive scope of measurements, observations, and interpretations made by pathologists - all expressed in natural language. With detailed feature analysis, the system generates meaningful features and medical insights into lymphoma classification.

The team's model can help doctors make more accurate diagnoses based on more comprehensive evidence. Imagine what an impact such a system would make if expanded across other institutions!

Now the answers and advice of a specialized domain can be provided not only by an expert, but via a software program as well.
Such software could be created for professionals from any sphere - from insurance adjusters to engineers and managers.
And it may do the job even better than a human.

WHAT IS AN EXPERT SYSTEM

An expert system (ES) is a program designed to solve problems within a specialized domain that ordinarily requires a human expert.
By mimicking the thinking of human experts, the system can perform analysis, design, or monitoring, make decisions and more.

In fact, such systems were built long ago and were the first successful application of Artificial Intelligence. But due to the poor state of AI and NLP at the time, expert systems did not live up to business-world expectations, and the term itself dropped out of the IT-world lexicon.

But now, with the rapid development and prominent advancements of Artificial Intelligence, Machine Learning, Deep Learning and Natural Language Processing, we are about to observe their comeback. They may go by different names, but the essence stays the same - solving expert-level issues.

MAJOR COMPONENTS
1. Knowledge base
The power of expert systems stems from the specific knowledge about a narrow domain it stores. The knowledge base of an ES contains both factual and heuristic knowledge.
2. Inference engine - the reasoning mechanism
The inference engine provides a methodology for reasoning about information in the knowledge base. Its goal is to come up with a recommendation; to do so, it combines the facts of a specific case (input data) with the knowledge contained in the knowledge base (a small sketch follows this list).
Inference can be performed using semantic networks, production rules, and logic statements.
There are two chaining strategies - forward and backward chaining. Forward chaining works from known facts toward conclusions and is applied to make predictions, whereas backward chaining works backward from a goal to find the reasons why something has happened.
3. User interface - the hardware and software that provides interaction between the program and its users.
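To make the interplay of the first two components concrete, here is a minimal sketch of a rule-based knowledge base and a forward-chaining inference engine in Python. The rules, facts, and function names are invented for illustration; real expert system shells are far more elaborate:

```python
# A toy knowledge base: each rule maps a set of conditions to a conclusion.
# Rules and facts below are invented for illustration only.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_specialist"),
]

def forward_chain(facts, rules):
    """Forward chaining: repeatedly fire every rule whose conditions
    are all satisfied, until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

case = {"fever", "cough", "short_of_breath"}   # input data for one case
print(forward_chain(case, rules))
# -> the derived facts include 'flu_suspected' and 'refer_to_specialist'
```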

TYPES OF PROBLEMS THAT CAN BE SOLVED

ESs can be differentiated by the action they perform or the type of problem they help resolve:
Classification & Diagnosis: identifying an object based on stated characteristics
Examples: medical disease diagnosis, insurance application fraud detection
Monitoring: continuously comparing data with prescribed behavior
Examples: leakage monitoring in a long petroleum pipeline, finding faults in vehicles
Prediction: projecting probable outcomes
Examples: prediction of share market status, contract estimation
Design: configuring a system according to specifications
Examples: airline scheduling, cargo scheduling

BENEFITS OF THE EXPERT SYSTEMS

Expert knowledge becomes available

Expertise is very difficult to obtain and capture. At a certain point, many experts deepen their understanding to such a degree that their decisions become somewhat intuitive. As a result, their explanations wouldn't be of much help. Besides, their time is precious and should not be spent on indirect tasks too often.
But once expert knowledge is mined and stored in the software in a structured way, it can be easily retrieved and comprehended.

Pieces of information are brought together

The specialists whom a professional might like to consult may not be within reach. Also, a specialist may not be aware of modern inventions, new studies and discoveries related to a part of their job.
An ES can be of great help by offering knowledge of similar cases, especially if used by an international company. Besides, an ES can also serve as a self-check tool.

Automation & speed

ESs offer great speed and reduce the amount of work an individual puts in.

Reduced errors and risk

An ES's error rate is low compared to human error rates. Not to mention the fact that expert systems can work in environments dangerous to humans.

Help even to non-experts


An ES can help by serving as a training tool for young employees and
non-experts.

Artificial Intelligence and Expert Systems

A short reminder: Artificial Intelligence is the field of Computer Science devoted to giving machines features associated with human intelligence. These include reasoning, evaluation, learning, language recognition, decision-making and problem solving.
Expert systems were the first successful application of Artificial Intelligence to the purposes of business.
Their decision-making was rule-based - it consisted of a great number of "if-then" rules.
For instance, "If it is sunny, then I'll go swimming", and so on.

Rule-based systems are the simplest form of Artificial Intelligence.


But such an approach wasn't enough for a really powerful, robust ES. Rule-based decisions couldn't deal with many issues. For example, the systems often failed when faced with a new situation that was not hard-coded. It was also challenging to gather expert knowledge (the "knowledge acquisition" problem) and construct a knowledge base.
As a result, Expert Systems did not live up to business-world expectations. For a while, they sank into oblivion.
A shift from a rule-based approach to a data-driven one paved the way to a new era in Artificial Intelligence.
Prior to the advancements in AI, there was a serious increase in computing power capabilities. Also, data became easier to gather and inexpensive to store. Then the whole AI paradigm changed.
Instead of making a system that attempts to draw logical conclusions based on predefined rules, AI software began to use a data-driven, probability-based approach. By exposing large quantities of known facts to a learning mechanism and performing tuning sessions, you get a system that can make predictions or identifications for unseen cases. This is an approach of constant trial and error.
That, in essence, is the underlying concept of Machine Learning.
For Expert Systems, it seems, the tide has turned
If we define an Expert System by its direct use - as software intended to solve expert-level problems and tasks, rather than by the method of achieving it - we can be quite sure they are about to return.
Diagnostic expert system applications continue to be the most popular. One example is IBM's Watson, which has been reported to outperform human doctors at diagnosing some cancers.
Also, in more recent years, recommendation systems have taken over in recommending products to customers. A notable development was the Netflix contest for movie recommendations, which led to a burst of innovation and interest in the area. And this trend will continue.
There is every reason to believe that the recent advancements of Artificial
Intelligence technologies would greatly contribute to the further
development of expert systems.

Telerobotics
Telerobotics is the area of robotics concerned with the control of semi-autonomous robots from a distance, chiefly using wireless networks (like Wi-Fi, Bluetooth, the Deep Space Network, and similar) or tethered connections. It is a combination of two major subfields, teleoperation and telepresence.
Teleoperation indicates operation of a machine at a distance. It is similar in meaning to the phrase "remote control" but is usually encountered in research, academic and technical environments. It is most commonly associated with robotics and mobile robots but can be applied to a whole range of circumstances in which a device or machine is operated by a person from a distance.[1]

[Figure: Early telerobotics - the US Air Force Virtual Fixtures system (Rosenberg, 1992)]
Teleoperation is the most standard term, used both in research and
technical communities, for referring to operation at a distance. This is
opposed to "telepresence", which refers to the subset of telerobotic
systems configured with an immersive interface such that the operator
feels present in the remote environment, projecting his or her presence
through the remote robot. One of the first telepresence systems that
enabled operators to feel present in a remote environment through all of
the primary senses (sight, sound, and touch) was the Virtual Fixtures
system developed at US Air Force Research Laboratories in the early
1990s. The system enabled operators to perform dexterous tasks
(inserting pegs into holes) remotely such that the operator would feel as if he or she were inserting the pegs when in fact it was a robot remotely performing the task.
A telemanipulator (or teleoperator) is a device that is controlled
remotely by a human operator. In simple cases the controlling operator's
command actions correspond directly to actions in the device controlled,
as for example in a radio controlled model aircraft or a tethered deep
submergence vehicle. Where communications delays make direct control
impractical (such as a remote planetary rover), or it is desired to reduce
operator workload (as in a remotely controlled spy or attack aircraft), the
device will not be controlled directly, instead being commanded to follow
a specified path. At increasing levels of sophistication the device may
operate somewhat independently in matters such as obstacle avoidance,
also commonly employed in planetary rovers.
Devices designed to allow the operator to control a robot at a distance are
sometimes called telecheric robotics.
Two major components of telerobotics and telepresence are the visual
and control applications. A remote camera provides a visual
representation of the view from the robot. Placing the robotic camera in a
perspective that allows intuitive control is a recent technique that, although rooted in science fiction (Robert A. Heinlein's Waldo, 1942), has only lately borne fruit, as the speed, resolution and bandwidth needed to control the robot camera in a meaningful way have only recently become adequate. Using a head-mounted display, control of the camera can be facilitated by tracking the operator's head.
This only works if the user feels comfortable with the latency of the system, the lag in the response to movements, and the visual representation. Any issues such as inadequate resolution, latency of the video image, lag in the mechanical and computer processing of the movement and response, and optical distortion due to camera lens and head-mounted display lenses can cause 'simulator sickness', which is exacerbated by the lack of vestibular stimulation accompanying the visual representation of motion. Mismatches between the user's motions and the system's response - registration errors, lag due to overfiltering, inadequate resolution for small movements, and slow speed - can contribute to these problems.
The same technology can control the robot, but then the eye–hand
coordination issues become even more pervasive through the system, and
user tension or frustration can make the system difficult to use.
The tendency in building robots has been to minimize the degrees of freedom, because that reduces the control problems. Recent improvements in computers have shifted the emphasis to more degrees of freedom, allowing robotic devices that seem more intelligent and more human in their motions. This also allows more direct teleoperation, as the user can control the robot with their own motions.[5]
Interfaces
A telerobotic interface can be as simple as a common MMK (monitor-
mouse-keyboard) interface. While this is not immersive, it is inexpensive.
Telerobotics driven by internet connections are often of this type. A
valuable modification to MMK is a joystick, which provides a more
intuitive navigation scheme for planar robot movement.
Dedicated telepresence setups utilize a head mounted display with either
single or dual eye display, and an ergonomically matched interface with
joystick and related button, slider, trigger controls.
Other interfaces merge fully immersive virtual reality interfaces and real-
time video instead of computer-generated images.[6] Another example
would be to use an omnidirectional treadmill with an immersive display
system so that the robot is driven by the person walking or running.
Additional modifications may include merged data displays such as infrared thermal imaging, real-time threat assessment, or device schematics.
Applications

Space

[Figure: NASA HERRO (Human Exploration using Real-time Robotic Operations) telerobotic exploration concept[7]]
With the exception of the Apollo program, most space exploration has
been conducted with telerobotic space probes. Most space-based
astronomy, for example, has been conducted with telerobotic telescopes.
The Russian Lunokhod-1 mission, for example, put a remotely driven
rover on the moon, which was driven in real time (with a 2.5-second
lightspeed time delay) by human operators on the ground. Robotic
planetary exploration programs use spacecraft that are programmed by
humans at ground stations, essentially achieving a long-time-delay form
of telerobotic operation. Recent noteworthy examples include the Mars
exploration rovers (MER) and the Curiosity rover. In the case of the MER
mission, the spacecraft and the rover operated on stored programs, with
the rover drivers on the ground programming each day's operation. The
International Space Station (ISS) uses a two-armed telemanipulator called
Dextre. More recently, a humanoid robot Robonaut[8] has been added to
the space station for telerobotic experiments.
NASA has proposed use of highly capable telerobotic systems[9] for
future planetary exploration using human exploration from orbit. In a
concept for Mars Exploration proposed by Landis, a precursor mission to
Mars could be done in which the human vehicle brings a crew to Mars,
but remains in orbit rather than landing on the surface, while a highly
capable remote robot is operated in real time on the surface.[10] Such a
system would go beyond the simple long time delay robotics and move to
a regime of virtual telepresence on the planet. One study of this concept,
the Human Exploration using Real-time Robotic Operations (HERRO)
concept, suggested that such a mission could be used to explore a wide
variety of planetary destinations.

Telepresence and videoconferencing

[Figure: iRobot Ava 500, an autonomous roaming telepresence robot]


The prevalence of high quality video conferencing using mobile devices,
tablets and portable computers has enabled a drastic growth in
telepresence robots to help give a better sense of remote physical
presence for communication and collaboration in the office, home,
school, etc. when one cannot be there in person. The robot avatar can
move or look around at the command of the remote person.[11][12]
There have been two primary approaches, both of which utilize videoconferencing on a display:
1) Desktop telepresence robots - typically mount a phone or tablet on a motorized desktop stand to enable the remote person to look around a remote environment by panning and tilting the display.
2) Drivable telepresence robots - typically contain a display (integrated or separate phone or tablet) mounted on a roaming base.
Some examples of desktop telepresence robots include Kubi by Revolve Robotics, Galileo by Motrr, and Swivl. Some examples of roaming telepresence robots include Beam by Suitable Technologies, Double by Double Robotics, RP-Vita by iRobot and InTouch Health, Anybots, Vgo, TeleMe by Mantarobot, and Romo by Romotive. More modern roaming telepresence robots may include the ability to operate autonomously. The robots can map out the space and avoid obstacles while driving themselves between rooms and their docking stations.
Traditional videoconferencing systems and telepresence rooms generally
offer Pan / Tilt / Zoom cameras with far end control. The ability for the
remote user to turn the device's head and look around naturally during a
meeting is often seen as the strongest feature of a telepresence robot. For this reason, developers have created a new category of desktop telepresence robots that concentrate on this strength at a much lower cost. These desktop telepresence robots, also called head-and-neck robots, allow users to look around during a meeting and are small enough to be carried from location to location, eliminating the need for remote navigation.
Telepresence robots are also highly helpful for children with long-term illnesses who are unable to attend school regularly. The latest innovative technologies can bring people together and allow them to stay connected to each other, which significantly helps them overcome loneliness.
Marine applications

Marine remotely operated vehicles (ROVs) are widely used to work in water too deep or too dangerous for divers. They repair offshore oil platforms and attach cables to sunken ships to hoist them. They are
usually attached by a tether to a control center on a surface ship. The
wreck of the Titanic was explored by an ROV, as well as by a crew-
operated vessel.
Telemedicine

See also: Remote surgery and Medical robot


Additionally, a lot of telerobotic research is being done in the field of medical devices and minimally invasive surgical systems. With a robotic surgery system, a surgeon can work inside the body through tiny holes just big enough for the manipulator, with no need to open up the chest cavity to allow hands inside.

Virtual reality

Virtual reality (VR) is a simulated experience that can be similar to or completely different from the real world. Applications of virtual reality include entertainment (e.g. gaming) and education (e.g. medical or military training). Other distinct types of VR-style technology include augmented reality and mixed reality.
Currently standard virtual reality systems use either virtual reality
headsets or multi-projected environments to generate realistic images,
sounds and other sensations that simulate a user's physical presence in a
virtual environment. A person using virtual reality equipment is able to
look around the artificial world, move around in it, and interact with
virtual features or items. The effect is commonly created by VR headsets
consisting of a head-mounted display with a small screen in front of the
eyes, but can also be created through specially designed rooms with
multiple large screens. Virtual reality typically incorporates auditory and
video feedback, but may also allow other types of sensory and force
feedback through haptic technology.
"Virtual" has had the meaning of "being something in essence or effect,
though not actually or in fact" since the mid-1400s.[1] The term "virtual"
has been used in the computer sense of "not physically existing but made
to appear by software" since 1959.[1]
In 1938, French avant-garde playwright Antonin Artaud described the
illusory nature of characters and objects in the theatre as "la réalité
virtuelle" in a collection of essays, Le Théâtre et son double. The English
translation of this book, published in 1958 as The Theater and its
Double,[2] is the earliest published use of the term "virtual reality". The
term "artificial reality", coined by Myron Krueger, has been in use since
the 1970s. The term "virtual reality" was first used in a science fiction
context in The Judas Mandala, a 1982 novel by Damien Broderick.
Forms and methods
Further information: Immersion (virtual reality) and Reality–virtuality
continuum
One method by which virtual reality can be realized is simulation-based
virtual reality. Driving simulators, for example, give the driver on board
the impression of actually driving an actual vehicle by predicting
vehicular motion caused by driver input and feeding back corresponding
visual, motion and audio cues to the driver.
With avatar image-based virtual reality, people can join the virtual environment in the form of real video as well as an avatar. One can participate in the 3D distributed virtual environment in the form of either a conventional avatar or real video. A user can select their own type of participation based on the system capability.
In projector-based virtual reality, modeling of the real environment plays
a vital role in various virtual reality applications, such as robot
navigation, construction modeling, and airplane simulation. Image-based
virtual reality systems have been gaining popularity in computer graphics
and computer vision communities. In generating realistic models, it is
essential to accurately register acquired 3D data; usually, a camera is
used for modeling small objects at a short distance.
Desktop-based virtual reality involves displaying a 3D virtual world on a
regular desktop display without use of any specialized positional tracking
equipment. Many modern first-person video games can be used as an
example, using various triggers, responsive characters, and other such
interactive devices to make the user feel as though they are in a virtual
world. A common criticism of this form of immersion is that there is no
sense of peripheral vision, limiting the user's ability to know what is
happening around them.

[Figure: A Missouri National Guardsman looks into a VR training head-mounted display at Fort Leonard Wood in 2015]
A head-mounted display (HMD) more fully immerses the user in a virtual world. A virtual reality headset typically includes two small high-resolution OLED or LCD monitors providing separate images for each eye for stereoscopic rendering of a 3D virtual world, a binaural audio system, and positional and rotational real-time head tracking for six degrees of freedom. Options include motion controllers with haptic feedback for physically interacting within the virtual world in an intuitive way with little to no abstraction, and an omnidirectional treadmill for more freedom of physical movement, allowing the user to perform locomotive motion in any direction.
Augmented reality (AR) is a type of virtual reality technology that blends
what the user sees in their real surroundings with digital content
generated by computer software. The additional software-generated
images with the virtual scene typically enhance how the real surroundings
look in some way. AR systems layer virtual information over a camera's live feed into a headset or smartglasses, or through a mobile device, giving the user the ability to view three-dimensional images.
Mixed reality (MR) is the merging of the real world and virtual worlds to
produce new environments and visualizations where physical and digital
objects co-exist and interact in real time.
A cyberspace is a networked virtual reality.[3]
Simulated reality is a hypothetical virtual reality as truly immersive as the
actual reality, enabling an advanced lifelike experience or even virtual
eternity. It is most likely to be produced using a brain–computer interface
and quantum computing.

[Figure: View-Master, a stereoscopic visual simulator, introduced in 1939]


The exact origins of virtual reality are disputed, partly because of how
difficult it has been to formulate a definition for the concept of an
alternative existence.[4] The development of perspective in Renaissance
Europe created convincing depictions of spaces that did not exist, in what
has been referred to as the "multiplying of artificial worlds".[5] Other
elements of virtual reality appeared as early as the 1860s. Antonin Artaud
took the view that illusion was not distinct from reality, advocating that
spectators at a play should suspend disbelief and regard the drama on
stage as reality.[2] The first references to the more modern concept of
virtual reality came from science fiction.

20th century

Morton Heilig wrote in the 1950s of an "Experience Theatre" that could


encompass all the senses in an effective manner, thus drawing the viewer
into the onscreen activity. He built a prototype of his vision dubbed the
Sensorama in 1962, along with five short films to be displayed in it while
engaging multiple senses (sight, sound, smell, and touch). Predating
digital computing, the Sensorama was a mechanical device. Heilig also
developed what he referred to as the "Telesphere Mask" (patented in
1960). The patent application described the device as "a telescopic
television apparatus for individual use...The spectator is given a complete
sensation of reality, i.e. moving three dimensional images which may be
in colour, with 100% peripheral vision, binaural sound, scents and air
breezes."[6]
In 1968, Ivan Sutherland, with the help of his students including Bob
Sproull, created what was widely considered to be the first head-mounted
display system for use in immersive simulation applications. It was
primitive both in terms of user interface and visual realism, and the HMD
to be worn by the user was so heavy that it had to be suspended from the
ceiling. The graphics comprising the virtual environment were simple
wire-frame model rooms. The formidable appearance of the device
inspired its name, The Sword of Damocles.
1970–1990
The virtual reality industry mainly provided VR devices for medical,
flight simulation, automobile industry design, and military training
purposes from 1970 to 1990.[7]
David Em became the first artist to produce navigable virtual worlds at NASA's Jet Propulsion Laboratory (JPL) from 1977 to 1984.[8] The Aspen Movie Map, a crude virtual tour in which users could wander the streets of Aspen in one of three modes (summer, winter, and polygons), was created at MIT in 1978.

[Figure: NASA Ames's 1985 VIEW headset]


In 1979, Eric Howlett developed the Large Expanse, Extra Perspective
(LEEP) optical system. The combined system created a stereoscopic
image with a field of view wide enough to create a convincing sense of
space. The users of the system have been impressed by the sensation of
depth (field of view) in the scene and the corresponding realism. The
original LEEP system was redesigned for NASA's Ames Research Center
in 1985 for their first virtual reality installation, the VIEW (Virtual
Interactive Environment Workstation) by Scott Fisher. The LEEP system
provides the basis for most of the modern virtual reality headsets.[9]

[Figure: A VPL Research DataSuit, a full-body outfit with sensors for measuring the movement of arms, legs, and trunk; developed circa 1989 and displayed at the Nissho Iwai showroom in Tokyo]
By the 1980s, the term "virtual reality" was popularized by Jaron Lanier,
one of the modern pioneers of the field. Lanier had founded the company
VPL Research in 1985. VPL Research has developed several VR devices
like the DataGlove, the EyePhone, and the AudioSphere. VPL licensed
the DataGlove technology to Mattel, which used it to make the Power
Glove, an early affordable VR device.
Atari founded a research lab for virtual reality in 1982, but the lab was
closed after two years due to the Atari Shock (North American video
game crash of 1983). However, its former employees, such as Tom Zimmerman, Scott Fisher, Jaron Lanier, Michael Naimark, and Brenda Laurel, continued their research and development on VR-related technologies.
In 1988, the Cyberspace Project at Autodesk was the first to implement VR on a low-cost personal computer.[10][11] The project leader Eric Gullichsen left in 1990 to found Sense8 Corporation and develop the WorldToolKit virtual reality SDK,[12] which offered the first real-time graphics with texture mapping on a PC, and was widely used throughout industry and academia.[13][14]
1990–2000
The 1990s saw the first widespread commercial releases of consumer
headsets. In 1992, for instance, Computer Gaming World predicted
"affordable VR by 1994".[15]
In 1991, Sega announced the Sega VR headset for arcade games and the
Mega Drive console. It used LCD screens in the visor, stereo headphones,
and inertial sensors that allowed the system to track and react to the
movements of the user's head.[16] In the same year, Virtuality launched
and went on to become the first mass-produced, networked, multiplayer
VR entertainment system that was released in many countries, including a
dedicated VR arcade at Embarcadero Center. Costing up to $73,000 per
multi-pod Virtuality system, they featured headsets and exoskeleton
gloves that gave one of the first "immersive" VR experiences.[17]

[Figure: A CAVE system at IDL's Center for Advanced Energy Studies in 2010]


That same year, Carolina Cruz-Neira, Daniel J. Sandin and Thomas A.
DeFanti from the Electronic Visualization Laboratory created the first
cubic immersive room, the Cave automatic virtual environment (CAVE).
Developed as Cruz-Neira's PhD thesis, it involved a multi-projected
environment, similar to the holodeck, allowing people to see their own
bodies in relation to others in the room.[18][19] Antonio Medina, an MIT graduate and NASA scientist, designed a virtual reality system to "drive" Mars rovers from Earth in apparent real time despite the substantial delay of Mars-Earth-Mars signals.[20]
[Figure: The Virtual Fixtures immersive AR system developed in 1992; pictured is Dr. Louis Rosenberg interacting freely in 3D with overlaid virtual objects called 'fixtures']
In 1992, Nicole Stenger created Angels, the first real-time interactive
immersive movie where the interaction was facilitated with a dataglove
and high-resolution goggles. That same year, Louis Rosenberg created
the virtual fixtures system at the U.S. Air Force's Armstrong Labs using a
full upper-body exoskeleton, enabling a physically realistic mixed reality
in 3D. The system enabled the overlay of physically real 3D virtual
objects registered with a user's direct view of the real world, producing
the first true augmented reality experience enabling sight, sound, and
touch.[21][22]
In 1994, Sega released the Sega VR-1 motion simulator arcade attraction[23][24] in SegaWorld amusement arcades. It was able to track head movement and featured 3D polygon graphics in stereoscopic 3D, powered by the Sega Model 1 arcade system board.[25] Apple released QuickTime VR, which, despite using the term "VR", displayed not true virtual reality but 360-degree photographic panoramas.
Nintendo's Virtual Boy console was released in 1995.[26] A group in
Seattle created public demonstrations of a "CAVE-like" 270 degree
immersive projection room called the Virtual Environment Theater,
produced by entrepreneurs Chet Dagit and Bob Jacobson.[27] Forte
released the VFX1, a PC-powered virtual reality headset that same year.
In 1999, entrepreneur Philip Rosedale formed Linden Lab with an initial
focus on the development of VR hardware. In its earliest form, the
company struggled to produce a commercial version of "The Rig", which
was realized in prototype form as a clunky steel contraption with several
computer monitors that users could wear on their shoulders. The concept
was later adapted into the personal computer-based, 3D virtual world
program Second Life.
21st century
The 2000s were a period of relative public and investment indifference to
commercially available VR technologies.
In 2001, SAS Cube (SAS3) became the first PC-based cubic room,
developed by Z-A Production (Maurice Benayoun, David Nahon), Barco,
and Clarté. It was installed in Laval, France. The SAS library gave birth
to Virtools VRPack. In 2007, Google introduced Street View, a service
that shows panoramic views of an increasing number of worldwide
positions such as roads, indoor buildings and rural areas. It also features a
stereoscopic 3D mode, introduced in 2010.
2010-present

[Figure: An inside view of the Oculus Rift Crescent Bay prototype headset]


In 2010, Palmer Luckey designed the first prototype of the Oculus Rift.
This prototype, built on a shell of another virtual reality headset, was only
capable of rotational tracking. However, it boasted a 90-degree field of
vision that was previously unseen in the consumer market at the time.
Distortion issues arising from the lens used to create the field of vision
were corrected for by software written by John Carmack for a version of
Doom 3. This initial design would later serve as a basis for the later designs.[30] In 2012, the Rift was presented for the first time at the E3 gaming trade show by Carmack.[31][32] In 2014, Facebook purchased Oculus VR for what was stated at the time as $2 billion,[33] though it was later revealed that the more accurate figure was $3 billion.[32] This purchase occurred after the first development kits ordered through Oculus' 2012 Kickstarter had shipped in 2013 but before the shipping of their second development kits in 2014.[34] ZeniMax, Carmack's former employer, sued Oculus and Facebook for taking company secrets to Facebook;[32] the verdict was in favour of ZeniMax, and the case was later settled out of court.[35]
In 2013, Valve Corporation discovered and freely shared the
breakthrough of low-persistence displays which make lag-free and smear-
free display of VR content possible.[36] This was adopted by Oculus and
was used in all their future headsets. In early 2014, Valve showed off
their SteamSight prototype, the precursor to both consumer headsets
released in 2016. It shared major features with the consumer headsets
including separate 1K displays per eye, low persistence, positional
tracking over a large area, and fresnel lenses.[37][38] HTC and Valve
announced the virtual reality headset HTC Vive and controllers in 2015.
The set included tracking technology called Lighthouse, which utilized
wall-mounted "base stations" for positional tracking using infrared
light.[39][40][41]

The Project Morpheus (PlayStation VR) headset worn at gamescom 2015
In 2014, Sony announced Project Morpheus (its code name for the
PlayStation VR), a virtual reality headset for the PlayStation 4 video
game console.[42] In 2015, Google announced Cardboard, a do-it-yourself
stereoscopic viewer: the user places their smartphone in the cardboard
holder, which they wear on their head. Michael Naimark was appointed
Google's first-ever 'resident artist' in their new VR division. The
Kickstarter campaign for Gloveone, a pair of gloves providing motion
tracking and haptic feedback, was successfully funded, with over
$150,000 in contributions. Also in 2015, Razer unveiled its open-source
project OSVR.

Smartphone-based budget headset Samsung Gear VR in dismantled state
By 2016, at least 230 companies were developing VR-related
products. Amazon, Apple, Facebook, Google, Microsoft, Sony and
Samsung all had dedicated AR and VR groups. Dynamic binaural audio
was common to most headsets released that year. However, haptic
interfaces were not well developed, and most hardware packages
incorporated button-operated handsets for touch-based interactivity.
Visually, displays were still of a low enough resolution and frame rate
that images were still identifiable as virtual.
In 2016, HTC shipped its first units of the HTC Vive SteamVR
headset.[45] This marked the first major commercial release of sensor-
based tracking, allowing for free movement of users within a defined
space.[46] A patent filed by Sony in 2017 showed they were developing a
similar location tracking technology to the Vive for PlayStation VR, with
the potential for the development of a wireless headset.[47]
Technology
See also: Immersive technology

Software

The Virtual Reality Modelling Language (VRML), first introduced in
1994, was intended for the development of "virtual worlds" without
dependency on headsets.[48] The Web3D consortium was subsequently
founded in 1997 for the development of industry standards for web-based
3D graphics. The consortium subsequently developed X3D from the
VRML framework as an archival, open-source standard for web-based
distribution of VR content.[49] WebVR is an experimental JavaScript
application programming interface (API) that provides support for
various virtual reality devices, such as the HTC Vive, Oculus Rift,
Google Cardboard or OSVR, in a web browser.

Hardware

Paramount for the sensation of immersion in virtual reality are a high
frame rate (at least 95 fps) and low latency.
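As a rough illustration of what the frame-rate requirement means in practice, the short sketch below (plain Python; the 95 fps figure is taken from the text, the other rates are common headset refresh rates added for comparison) converts refresh rates into per-frame rendering budgets:

```python
# Illustrative only: the time budget available to render one frame.
# Exceeding the budget drops frames, which increases perceived latency
# and is a known trigger of VR sickness (discussed later in the text).

def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to render one frame at the given rate."""
    return 1000.0 / fps

for fps in (60, 90, 95, 120):
    # e.g. 95 fps leaves roughly 10.5 ms per frame
    print(f"{fps:>3} fps -> {frame_budget_ms(fps):5.2f} ms per frame")
```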
Modern virtual reality headset displays are based on technology
developed for smartphones including: gyroscopes and motion sensors for
tracking head, hand, and body positions; small HD screens for
stereoscopic displays; and small, lightweight and fast computer
processors. These components led to relative affordability for
independent VR developers, and led to the 2012 Oculus Rift Kickstarter
offering the first independently developed VR headset.[44]
Independent production of VR images and video has increased with the
development of omnidirectional cameras, also known as 360-degree
cameras or VR cameras, that have the ability to record 360-degree
interactive photography, although at low resolutions or in highly
compressed formats for online streaming of 360 video.[51] In contrast,
photogrammetry is increasingly used to combine several high-resolution
photographs for the creation of detailed 3D objects and environments in
VR applications.[52][53]
To create a feeling of immersion, special output devices are needed to
display virtual worlds. Well-known formats include head-mounted
displays or the CAVE. In order to convey a spatial impression, two
images are generated and displayed from different perspectives (stereo
projection). There are different technologies available to bring the
respective image to the right eye. A distinction is made between active
(e.g. shutter glasses) and passive technologies (e.g. polarizing filters or
Infitec).[citation needed]
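The two-viewpoint idea behind stereo projection can be sketched in a few lines. The interpupillary distance value and the function names below are illustrative assumptions for this sketch, not any engine's API:

```python
# A minimal sketch of stereo projection: render the scene twice from
# eye positions offset by half the interpupillary distance (IPD),
# then route one image to each eye.

import numpy as np

IPD = 0.064  # assumed average interpupillary distance in metres

def eye_positions(head_pos: np.ndarray, right_axis: np.ndarray):
    """Offset the head position by half the IPD along the head's
    local right direction to get one viewpoint per eye."""
    offset = (IPD / 2.0) * right_axis
    return head_pos - offset, head_pos + offset

head = np.array([0.0, 1.7, 0.0])   # head at a standing eye height
right = np.array([1.0, 0.0, 0.0])  # head's local right direction (unit vector)
left_eye, right_eye = eye_positions(head, right)
# The scene is rendered once from each position (stereo projection),
# and the display presents each image to the matching eye.
print(left_eye, right_eye)
```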
Special input devices are required for interaction with the virtual world.
These include the 3D mouse, the wired glove, motion controllers, and
optical tracking sensors. Controllers typically use optical tracking
systems (primarily infrared cameras) for location and navigation, so that
the user can move freely without wiring. Some input devices provide the
user with force feedback to the hands or other parts of the body, so that
users can orient themselves in the three-dimensional world
through haptics and sensor technology as a further sensory channel and
carry out realistic simulations. Additional haptic feedback can be
obtained from omnidirectional treadmills (with which walking in virtual
space is controlled by real walking movements) and vibration gloves and
suits.
Virtual reality cameras can be used to create VR photography using 360-
degree panoramic videos. 360-degree camera shots can be mixed with
virtual elements to merge reality and fiction through special effects.[citation needed]
VR cameras are available in various formats, with varying numbers
of lenses installed in the camera.
Applications
Main article: Applications of virtual reality
Apollo 11 astronaut Buzz Aldrin previewing the Destination: Mars VR
experience at the Kennedy Space Center Visitor Complex in 2016
Virtual reality is most commonly used in entertainment applications such
as video gaming and 3D cinema. Consumer virtual reality headsets were
first released by video game companies in the early-mid 1990s.
Beginning in the 2010s, next-generation commercial tethered headsets
were released by Oculus (Rift), HTC (Vive) and Sony (PlayStation VR),
setting off a new wave of application development. 3D cinema has been
used for sporting events, pornography, fine art, music videos and short
films. Since 2015, roller coasters and theme parks have incorporated
virtual reality to match visual effects with haptic feedback.
In social sciences and psychology, virtual reality offers a cost-effective
tool to study and replicate interactions in a controlled environment. It can
be used as a form of therapeutic intervention. For instance, there is the
case of virtual reality exposure therapy (VRET), a form of exposure
therapy for treating anxiety disorders such as post-traumatic stress
disorder (PTSD) and phobias.
In medicine, simulated VR surgical environments under the supervision
of experts can provide effective and repeatable training at a low cost,
allowing trainees to recognize and amend errors as they occur. Virtual
reality has been used in physical rehabilitation since the 2000s. Despite
numerous studies conducted, good quality evidence of its efficacy
compared to other rehabilitation methods without sophisticated and
expensive equipment is lacking for the treatment of Parkinson's disease.
A 2018 review also examined the effectiveness of mirror therapy
delivered by virtual reality. Another study showed the potential for VR to
promote mimicry and revealed differences between neurotypical
individuals and individuals with autism spectrum disorder in their
responses to a two-dimensional avatar.

U.S. Navy medic demonstrating a VR parachute simulator at the Naval
Survival Training Institute in 2010
VR can simulate real workspaces for workplace occupational safety and
health purposes, educational purposes, and training purposes. It can be
used to provide learners with a virtual environment where they can
develop their skills without the real-world consequences of failing. It has
been used and studied in primary education, military training, astronaut
training, flight simulators, miner training, architectural design, driver
training and bridge inspection. Immersive VR engineering systems enable engineers to
see virtual prototypes prior to the availability of any physical prototypes.
Supplementing training with virtual training environments has been
claimed to offer avenues of realism in military[74] and healthcare[75]
training while minimizing cost.[76] It also has been claimed to reduce
military training costs by minimizing the amounts of ammunition
expended during training periods.[74]
The first fine art virtual world was created in the 1970s. As the
technology developed, more artistic programs were produced throughout
the 1990s, including feature films. When commercially available
technology became more widespread, VR festivals began to emerge in
the mid-2010s. The first uses of VR in museum settings began in the
1990s, seeing a significant increase in the mid-2010s. Additionally,
museums have begun making some of their content virtual reality
accessible.[78][79]
Virtual reality's growing market presents an opportunity and an
alternative channel for digital marketing. It is also seen as a new
platform for e-commerce, particularly in the bid to challenge traditional
"brick and mortar" retailers. However, a 2018 study revealed that the
majority of goods are still purchased in physical stores.
Concerns and challenges
Health and safety
There are many health and safety considerations of virtual reality. A
number of unwanted symptoms have been caused by prolonged use of
virtual reality,[82] and these may have slowed proliferation of the
technology. Most virtual reality systems come with consumer warnings,
including: seizures; developmental issues in children; trip-and-fall and
collision warnings; discomfort; repetitive stress injury; and interference
with medical devices.[83] Some users may experience twitches, seizures or
blackouts while using VR headsets, even if they do not have a history of
epilepsy and have never had blackouts or seizures before. As many as one
in 4,000 people may experience these symptoms. Since these symptoms
are more common among people under the age of 20, children are
advised against using VR headsets. Other problems may occur in physical
interactions with one's environment. While wearing VR headsets, people
quickly lose awareness of their real-world surroundings and may injure
themselves by tripping over, or colliding with real-world objects.[84]
VR headsets may regularly cause eye fatigue, as does all screen-based
technology, because people tend to blink less when watching screens,
causing their eyes to become more dried out.[85] There have been some
concerns about VR headsets contributing to myopia, but although VR
headsets sit close to the eyes, they may not necessarily contribute to
nearsightedness if the focal length of the image being displayed is
sufficiently far away.

A virtual reality headset with a panoramic lens supposed to eliminate the
symptoms of motion sickness
Virtual reality sickness (also known as cybersickness) occurs when a
person's exposure to a virtual environment causes symptoms that are
similar to motion sickness symptoms. Women are significantly more
affected than men by headset-induced symptoms, at rates of around 77%
and 33% respectively. The most common symptoms are general
discomfort, headache, stomach awareness, nausea, vomiting, pallor,
sweating, fatigue, drowsiness, disorientation, and apathy. For example,
Nintendo's Virtual Boy received much criticism for its negative physical
effects, including "dizziness, nausea, and headaches". These motion
sickness symptoms are caused by a disconnect between what is being
seen and what the rest of the body perceives. When the vestibular system,
the body's internal balancing system, does not experience the motion that
it expects from visual input through the eyes, the user may experience VR
sickness. This can also happen if the VR system does not have a high
enough frame rate, or if there is a lag between the body's movement and
the onscreen visual reaction to it. Because approximately 25–40% of
people experience some kind of VR sickness when using VR machines,
companies are actively looking for ways to reduce VR sickness.

Nanorobotics
Nanorobotics is an emerging technology field creating machines or
robots whose components are at or near the scale of a nanometer (10−9
meters). More specifically, nanorobotics (as opposed to microrobotics)
refers to the nanotechnology engineering discipline of designing and
building nanorobots, with devices ranging in size from 0.1–10
micrometres and constructed of nanoscale or molecular components. The
terms nanobot, nanoid, nanite, nanomachine, or nanomite have also been
used to describe such devices currently under research and development.

Nanomachines are largely in the research and development phase,[8] but
some primitive molecular machines and nanomotors have been tested. An
example is a sensor having a switch approximately 1.5 nanometers
across, able to count specific molecules in a chemical sample. The first
useful applications of nanomachines may be in nanomedicine. For
example, biological machines could be used to identify and destroy cancer
cells. Another potential application is the detection of toxic chemicals, and
the measurement of their concentrations, in the environment. Rice
University has demonstrated a single-molecule car developed by a
chemical process and including Buckminsterfullerenes (buckyballs) for
wheels. It is actuated by controlling the environmental temperature and
by positioning a scanning tunneling microscope tip.
Another definition is a robot that allows precise interactions with
nanoscale objects, or can manipulate with nanoscale resolution. Such
devices are more related to microscopy or scanning probe microscopy,
instead of the description of nanorobots as molecular machines. Using the
microscopy definition, even a large apparatus such as an atomic force
microscope can be considered a nanorobotic instrument when configured
to perform nanomanipulation. From this viewpoint, macroscale robots or
microrobots that can move with nanoscale precision can also be
considered nanorobots.

A ribosome is a biological machine.


According to Richard Feynman, it was his former graduate student and
collaborator Albert Hibbs who originally suggested to him (circa 1959)
the idea of a medical use for Feynman's theoretical micromachines (see
biological machine). Hibbs suggested that certain repair machines might
one day be reduced in size to the point that it would, in theory, be
possible to (as Feynman put it) "swallow the surgeon". The idea was
incorporated into Feynman's 1959 essay There's Plenty of Room at the
Bottom.
Since nanorobots would be microscopic in size, it would probably be
necessary for very large numbers of them to work together to perform
microscopic and macroscopic tasks. These nanorobot swarms, both those
unable to replicate (as in utility fog) and those able to replicate
unconstrainedly in the natural environment (as in grey goo and synthetic
biology), are found in many science fiction stories, such as the Borg
nanoprobes in Star Trek and The Outer Limits episode "The New Breed".
Some proponents of nanorobotics, in reaction to the grey goo scenarios
that they earlier helped to propagate, hold the view that nanorobots able
to replicate outside of a restricted factory environment do not form a
necessary part of a purported productive nanotechnology, and that the
process of self-replication, were it ever to be developed, could be made
inherently safe. They further assert that their current plans for developing
and using molecular manufacturing do not in fact include free-foraging
replicators.[13][14]
A detailed theoretical discussion of nanorobotics, including specific
design issues such as sensing, power, communication, navigation,
manipulation, locomotion, and onboard computation, has been presented
in the medical context of nanomedicine by Robert Freitas.[15][16] Some of
these discussions[which?] remain at the level of unbuildable generality and
do not approach the level of detailed engineering.
Legal and ethical implications
Open technology
A document with a proposal on nanobiotech development using open
design technology methods, as in open-source hardware and open-source
software, has been addressed to the United Nations General Assembly.[17]
According to the document sent to the United Nations, in the same way
that open source has in recent years accelerated the development of
computer systems, a similar approach should benefit the society at large
and accelerate nanorobotics development. The use of nanobiotechnology
should be established as a human heritage for the coming generations,
and developed as an open technology based on ethical practices for
peaceful purposes. Open technology is stated as a fundamental key for
such an aim.
In the same way that technology research and development drove the
space race and nuclear arms race, a race for nanorobots is occurring.
There are ample grounds for including nanorobots among the
emerging technologies.[23] Some of the reasons are that large
corporations, such as General Electric, Hewlett-Packard, Synopsys,
Northrop Grumman and Siemens have been recently working in the
development and research of nanorobots;[24][25][26][27][28] surgeons are
getting involved and starting to propose ways to apply nanorobots for
common medical procedures;[29] universities and research institutes were
granted funds by government agencies exceeding $2 billion towards
research developing nanodevices for medicine;[30][31] bankers are also
strategically investing with the intent to acquire beforehand rights and
royalties on future nanorobots commercialisation.[32] Some aspects of
nanorobot litigation and related issues linked to monopoly have already
arisen.[33][34][35] A large number of patents have been granted recently on
nanorobots, filed mostly by patent agents, companies specializing solely
in building patent portfolios, and lawyers. After a long series of patents
and, eventually, litigation (see, for example, the invention of radio or the
War of Currents), emerging fields of technology tend to become
monopolies, normally dominated by large corporations.[36]
Manufacturing approaches
Manufacturing nanomachines assembled from molecular components is a
very challenging task. Because of the level of difficulty, many engineers
and scientists continue working cooperatively across multidisciplinary
approaches to achieve breakthroughs in this new area of development.
Thus, the importance of the following distinct techniques currently
applied towards manufacturing nanorobots is quite understandable:

Biochip

Main article: Biochip


The joint use of nanoelectronics, photolithography, and new biomaterials
provides a possible approach to manufacturing nanorobots for common
medical uses, such as surgical instrumentation, diagnosis, and drug
delivery. This method of manufacturing at the nanotechnology scale has
been in use in the electronics industry since 2008.[40] So, practical nanorobots
should be integrated as nanoelectronics devices, which will allow tele-
operation and advanced capabilities for medical instrumentation.
Nubots

Main article: DNA machine


A nucleic acid robot (nubot) is an organic molecular machine at the
nanoscale.[43] DNA structure can provide means to assemble 2D and 3D
nanomechanical devices. DNA based machines can be activated using
small molecules, proteins and other molecules of DNA.[44][45][46]
Biological circuit gates based on DNA materials have been engineered as
molecular machines to allow in-vitro drug delivery for targeted health
problems.[47] Such material-based systems would work most closely to
smart biomaterial drug delivery systems,[48] while not allowing precise in
vivo teleoperation of such engineered prototypes.

Surface-bound systems

Several reports have demonstrated the attachment of synthetic molecular
motors to surfaces. These primitive nanomachines have been shown to
undergo machine-like motions when confined to the surface of a
macroscopic material. The surface anchored motors could potentially be
used to move and position nanoscale materials on a surface in the manner
of a conveyor belt.
Positional nanoassembly

The Nanofactory Collaboration, founded by Robert Freitas and Ralph
Merkle in 2000 and involving 23 researchers from 10 organizations and 4
countries, focuses on developing a practical research agenda[52]
specifically aimed at developing positionally-controlled diamond
mechanosynthesis and a diamondoid nanofactory that would have the
capability of building diamondoid medical nanorobots.
Biohybrids

The emerging field of bio-hybrid systems combines biological and
synthetic structural elements for biomedical or robotic applications. The
constituting elements of bio-nanoelectromechanical systems (BioNEMS)
are of nanoscale size, for example DNA, proteins or nanostructured
mechanical parts. Thiol-ene electron-beam resists allow the direct writing of
nanoscale features, followed by the functionalization of the natively
reactive resist surface with biomolecules.[53] Other approaches use a
biodegradable material attached to magnetic particles that allow them to
be guided around the body.
Bacteria-based

This approach proposes the use of biological microorganisms, like the
bacterium Escherichia coli[55] and Salmonella typhimurium.[56] Thus the
model uses a flagellum for propulsion purposes. Electromagnetic fields
normally control the motion of this kind of biological integrated
device.[57] Chemists at the University of Nebraska have created a
humidity gauge by fusing a bacterium to a silicon computer chip.

Virus-based

Retroviruses can be retrained to attach to cells and replace DNA. They go
through a process called reverse transcription to deliver genetic
packaging in a vector.[59] Usually, these devices use the Pol and Gag genes
of the virus for the capsid and delivery system. This process is called
retroviral gene therapy, having the ability to re-engineer cellular DNA by
usage of viral vectors.[60] This approach has appeared in the form of
retroviral, adenoviral, and lentiviral gene delivery systems.[61] These gene
therapy vectors have been used in cats to send genes into the genetically
modified organism (GMO), causing it to display the trait. [62]
3D printing
Main article: 3D printing
3D printing is the process by which a three-dimensional structure is built
through the various processes of additive manufacturing. Nanoscale 3D
printing involves many of the same processes, incorporated at a much
smaller scale. To print a structure at the 5-400 µm scale, the precision of
the 3D printing machine must be greatly improved. A two-step process of
3D printing, using 3D printing and laser-etched plates, was
incorporated as an improvement technique.[63] To be more precise at the
nanoscale, the 3D printing process uses a laser etching machine, which
etches into each plate the details needed for each segment of the nanorobot.
The plate is then transferred to the 3D printer, which fills the etched
regions with the desired nanoparticle. The 3D printing process is repeated
until the nanorobot is built from the bottom up. This 3D printing process
has many benefits. First, it increases the overall accuracy of the printing
process.[citation needed] Second, it has the potential to create functional
segments of a nanorobot.[63] The 3D printer uses a liquid resin, which is
hardened at precisely the correct spots by a focused laser beam. The focal
point of the laser beam is guided through the resin by movable mirrors
and leaves behind a hardened line of solid polymer, just a few hundred
nanometers wide. This fine resolution enables the creation of intricately
structured sculptures as tiny as a grain of sand. This process takes place
by using photoactive resins, which are hardened by the laser at an
extremely small scale to create the structure. This process is quick by
nanoscale 3D printing standards. Ultra-small features can be made with
the 3D micro-fabrication technique used in multiphoton
photopolymerisation. This approach uses a focused laser to trace the
desired 3D object into a block of gel. Due to the nonlinear nature of photo
excitation, the gel is cured to a solid only in the places where the laser
was focused while the remaining gel is then washed away. Feature sizes
of under 100 nm are easily produced, as well as complex structures with
moving and interlocked parts.
Potential uses

Nanomedicine

Main article: Nanomedicine


Potential uses for nanorobotics in medicine include early diagnosis and
targeted drug-delivery for cancer,[65][66][67] biomedical instrumentation,[68]
surgery,[69][70] pharmacokinetics,[10] monitoring of diabetes,[71][72][73] and
health care.
In such plans, future medical nanotechnology is expected to employ
nanorobots injected into the patient to perform work at a cellular level.
Such nanorobots intended for use in medicine should be non-replicating,
as replication would needlessly increase device complexity, reduce
reliability, and interfere with the medical mission.
Nanotechnology provides a wide range of new technologies for
developing customized means to optimize the delivery of pharmaceutical
drugs. Today, harmful side effects of treatments such as chemotherapy
are commonly a result of drug delivery methods that do not pinpoint their
intended target cells accurately.[74] Researchers at Harvard and MIT,
however, have been able to attach special RNA strands, measuring nearly
10 nm in diameter, to nanoparticles, filling them with a chemotherapy
drug. These RNA strands are attracted to cancer cells. When the
nanoparticle encounters a cancer cell, it adheres to it, and releases the
drug into the cancer cell.[75] This directed method of drug delivery has
great potential for treating cancer patients while avoiding negative effects
(commonly associated with improper drug delivery).[74][76] The first
demonstration of nanomotors operating in living organism was carried
out in 2014 at University of California, San Diego.[77] MRI-guided
nanocapsules are one potential precursor to nanorobots.[78]
Another useful application of nanorobots is assisting in the repair of
tissue cells alongside white blood cells.[79] Recruiting inflammatory cells
or white blood cells (which include neutrophil granulocytes,
lymphocytes, monocytes, and mast cells) to the affected area is the first
response of tissues to injury.[80] Because of their small size, nanorobots
could attach themselves to the surface of recruited white cells, to squeeze
their way out through the walls of blood vessels and arrive at the injury
site, where they can assist in the tissue repair process. Certain substances
could possibly be used to accelerate the recovery.
The science behind this mechanism is quite complex. Passage of cells
across the blood endothelium, a process known as transmigration, is a
mechanism involving engagement of cell surface receptors to adhesion
molecules, active force exertion and dilation of the vessel walls and
physical deformation of the migrating cells. By attaching themselves to
migrating inflammatory cells, the robots can in effect “hitch a ride”
across the blood vessels, bypassing the need for a complex transmigration
mechanism of their own.[79]
As of 2016, in the United States, the Food and Drug Administration (FDA)
regulates nanotechnology on the basis of size.[81]
Soutik Betal, during his doctoral research at the University of Texas at
San Antonio, developed nanocomposite particles that are controlled
remotely by an electromagnetic field.[82] This series of nanorobots,
now listed in Guinness World Records,[82] can be used
to interact with biological cells.[83] Scientists suggest that this
technology can be used for the treatment of cancer.

Unmanned vehicle
An unmanned vehicle or uncrewed vehicle is a vehicle without a person
on board. Uncrewed vehicles can either be remote controlled or remote
guided vehicles, or they can be autonomous vehicles which are capable
of sensing their environment and navigating on their own.

Types

• Unmanned ground vehicle (UGV), such as autonomous cars or
unmanned combat vehicles
• Unmanned aerial vehicle (UAV), unmanned aircraft commonly
known as a "drone"
o Unmanned combat aerial vehicle
o Miniature UAV
• Unmanned surface vehicle (USV), for the operation on the surface of
the water
• Unmanned underwater vehicle (UUV) sometimes known as
underwater drone, for the operation underwater
o Remotely operated underwater vehicle (ROUV)
o Autonomous underwater vehicle (AUV)
• Uncrewed spacecraft, both remote controlled ("uncrewed space
mission") and autonomous ("robotic spacecraft" or "space probe")
Gladiator Tactical Unmanned Ground Vehicle

An unmanned ground vehicle (UGV) is a vehicle that operates while in
contact with the ground and without an onboard human presence. UGVs
can be used for many applications where it may be inconvenient,
dangerous, or impossible to have a human operator present. Generally,
the vehicle will have a set of sensors to observe the environment, and will
either autonomously make decisions about its behavior or pass the
information to a human operator at a different location who will control
the vehicle through teleoperation.
The UGV is the land-based counterpart to unmanned aerial vehicles and
unmanned underwater vehicles. Unmanned robotics are being actively
developed for both civilian and military use to perform a variety of dull,
dirty, and dangerous activities.

RCA radio controlled car. Dayton, Ohio 1921

A working remote controlled car was reported in the October 1921 issue
of RCA's World Wide Wireless magazine. The car was unmanned and
controlled wirelessly via radio; it was thought the technology could
someday be adapted to tanks.[1] In the 1930s, the USSR developed
Teletanks, a machine gun-armed tank remotely controlled by radio from
another tank. These were used in the Winter War (1939–1940) against
Finland and at the start of the Eastern Front after Germany invaded the
USSR in 1941. During World War II, the British developed a radio-
controlled version of their Matilda II infantry tank in 1941. Known as
"Black Prince", it would have been used for drawing the fire of concealed
anti-tank guns, or for demolition missions. Due to the costs of converting
the transmission system of the tank to Wilson type gearboxes, an order
for 60 tanks was cancelled.[2]
From 1942, the Germans used the Goliath tracked mine for remote
demolition work. The Goliath was a small tracked vehicle carrying 60 kg
of explosive charge directed through a control cable. Their inspiration
was a miniature French tracked vehicle found after France was defeated
in 1940. The combination of cost, low speed, reliance on a cable for
control, and poor protection against weapons meant it was not considered
a success.
The first major mobile robot development effort named Shakey was
created during the 1960s as a research study for the Defense Advanced
Research Projects Agency (DARPA). Shakey was a wheeled platform
that had a TV camera, sensors, and a computer to help guide its
navigational tasks of picking up wooden blocks and placing them in
certain areas based on commands. DARPA subsequently developed a
series of autonomous and semi-autonomous ground robots, often in
conjunction with the U.S. Army. As part of the Strategic Computing
Initiative, DARPA demonstrated the Autonomous Land Vehicle, the first
UGV that could navigate completely autonomously on and off roads at
useful speeds.

Design

Based on its application, unmanned ground vehicles will generally
include the following components: platform, sensors, control systems,
guidance interface, communication links, and systems integration features.
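As a structural sketch only, the following composition mirrors the component list above; every class and field name is an illustrative assumption of this sketch, not a standard API:

```python
# How the listed UGV components might compose into one structure
# (purely illustrative; names are assumptions).

from dataclasses import dataclass, field
from typing import List

@dataclass
class Sensor:
    kind: str                  # e.g. "camera", "gyroscope", "laser rangefinder"

@dataclass
class Platform:
    locomotion: str            # "tracks", "wheels", or "legs"
    power_source: str

@dataclass
class UGV:
    platform: Platform
    sensors: List[Sensor] = field(default_factory=list)
    control_system: str = "remote-operated"   # or "autonomous", "supervisory"
    guidance_interface: str = "joystick"      # or "computer program", "voice"
    communication_link: str = "radio"         # or "fiber optics"

ugv = UGV(Platform("tracks", "battery"),
          [Sensor("camera"), Sensor("gyroscope")])
print(ugv.control_system)
```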

Platform

The platform can be based on an all-terrain vehicle design and includes
the locomotive apparatus, sensors, and power source. Tracks, wheels, and
legs are the common forms of locomotion. In addition, the platform may
include an articulated body and some are made to join with other units.

Sensors

A primary purpose of UGV sensors is navigation; another is environment
detection. Sensors can include compasses, odometers, inclinometers,
gyroscopes, cameras for triangulation, laser and ultrasound range finders,
and infrared technology.

Control systems

Unmanned ground vehicles are generally considered Remote-Operated
and Autonomous, although Supervisory Control is also used to refer to
situations where there is a combination of decision making from internal
UGV systems and the remote human operator.

Guardium used by the Israel Defense Forces to operate as part of the
border security operations
Remote operated

A remote-operated UGV is a vehicle that is controlled by a human
operator via interface. All actions are determined by the operator based
upon either direct visual observation or remote use of sensors such as
digital video cameras. A basic example of the principles of remote
operation would be a remote controlled toy car.
Some examples of remote-operated UGV technology are:
• Unmanned Snatch Land Rover.[8]
• Frontline Robotics Teleoperated UGV (TUGV)[9]
• Gladiator Tactical Unmanned Ground Vehicle (used by the United
States Marine Corps)
• iRobot PackBot
• Unmanned ground vehicle Miloš used by Serbian Armed Forces
• Foster-Miller TALON
• Remotec ANDROS F6A
• Autonomous Solutions [10]
• Mesa Associates Tactical Integrated Light-Force Deployment
Assembly (MATILDA)
• Vecna Robotics Battlefield Extraction-Assist Robot (BEAR)
• G-NIUS Autonomous Unmanned Ground Vehicles (Israel Aerospace
Industries/Elbit Systems joint venture) Guardium
• Robowatch ASENDRO
• Ripsaw MS1
• DRDO Daksh
• VIPeR
• DOK-ING mine clearing, firefighting, and underground mining UGVs
• MacroUSA Armadillo V2 Micro UGV (MUGV) and Scorpion SUGV
• Nova 5
• Krymsk APC

Autonomous

A US Army Multifunctional Utility/Logistics and Equipment (MULE)

An autonomous UGV is essentially an autonomous robot that operates
without the need for a human controller on the basis of artificial
intelligence technologies. The vehicle uses its sensors to develop some
limited understanding of the environment, which is then used by control
algorithms to determine the next action to take in the context of a human
provided mission goal. This fully eliminates the need for any human to
watch over the menial tasks that the UGV is completing.
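The sense-decide-act cycle described above can be illustrated with a deliberately tiny example; the one-dimensional "robot" below is an assumption for illustration, not a real control stack:

```python
# A toy sense-decide-act loop: the robot senses its position, a control
# algorithm decides the next action given a human-provided mission
# goal, and the action is applied. Everything here is a placeholder.

class PositionSensor:
    def __init__(self):
        self.position = 0.0
    def read(self):
        return self.position

def decide(position, goal):
    """Control algorithm: pick the next action from the world model."""
    if abs(goal - position) < 0.1:
        return 0.0                      # close enough: stop
    return 0.5 if goal > position else -0.5

sensor = PositionSensor()
goal = 3.0                              # mission goal provided by a human
for _ in range(20):
    pos = sensor.read()                 # sense: build a (trivial) world model
    velocity = decide(pos, goal)        # decide the next action
    sensor.position += velocity * 0.5   # act: integrate motion for 0.5 s
print(round(sensor.read(), 2))          # the loop ends near the goal
```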
A fully autonomous robot may have the ability to:
• Collect information about the environment, such as building maps of
building interiors.
• Detect objects of interest such as people and vehicles.
• Travel between waypoints without human navigation assistance.
• Work for extended durations without human intervention.
• Avoid situations that are harmful to people, property or itself, unless
those are part of its design specifications.
• Disarm, or remove explosives.
• Repair itself without outside assistance.
A robot may also be able to learn autonomously. Autonomous learning
includes the ability to:
• Learn or gain new capabilities without outside assistance.
• Adjust strategies based on the surroundings.
• Adapt to surroundings without outside assistance.
• Develop a sense of ethics regarding mission goals.
Autonomous robots still require regular maintenance, as with all
machines.
One of the most crucial aspects to consider when developing armed
autonomous machines is the distinction between combatants and
civilians. If done incorrectly, robot deployment can be detrimental. This
is particularly true in the modern era, when combatants often
intentionally disguise themselves as civilians to avoid detection. Even if a
robot maintained 99% accuracy, the number of civilian lives lost could still
be catastrophic. Due to this, it is unlikely that any fully autonomous
machines will be sent into battle armed, at least until a satisfactory
solution can be developed.
Some examples of autonomous UGV technology are:
• Vehicles developed for the DARPA Grand Challenge
• Autonomous car
• Multifunctional Utility/Logistics and Equipment vehicle
• Crusher developed by CMU for DARPA
Guidance interface
Depending on the type of control system, the interface between machine
and human operator can include a joystick, computer programs, or voice
commands.
Communication links
Communication between UGV and control station can be done via radio
control or fiber optics. It may also include communication with other
machines and robots involved in the operation.
Systems integration
Systems architecture integrates the interplay between hardware and
software and determines UGV success and autonomy.
Uses
There are a wide variety of UGVs in use today. Predominantly these
vehicles are used to replace humans in hazardous situations, such as
handling explosives and in bomb disabling vehicles, where additional
strength or smaller size is needed, or where humans cannot easily go.
Military applications include surveillance, reconnaissance, and target
acquisition.[7] They are also used in industries such as agriculture, mining
and construction.[13] UGVs are highly effective in naval operations; they
are of great importance in support of Marine Corps combat, and they can
additionally assist in logistics operations on land and afloat.
UGVs are also being developed for peacekeeping operations, ground
surveillance, gatekeeper/checkpoint operations, urban street presence and
to enhance police and military raids in urban settings. UGVs can "draw
first fire" from insurgents, reducing military and police casualties.
Furthermore, UGVs are now being used in rescue and recovery missions
and were first used to find survivors following 9/11 at Ground Zero.

Space Applications

NASA's Mars Exploration Rover project includes two UGVs, Spirit and
Opportunity, that are still performing beyond the original design
parameters. This is attributed to redundant systems, careful handling, and
long-term interface decision making.[4] Opportunity (rover) and its twin,
Spirit (rover), six-wheeled, solar powered ground vehicles, were launched
in July 2003 and landed on opposite sides of Mars in January 2004. The
Spirit rover operated nominally until it became trapped in deep sand in
April 2009, lasting more than 20 times longer than expected. Opportunity,
by comparison, has been operational for more than 12 years beyond its
intended lifespan of three months. Curiosity (rover) landed on Mars in
August 2012, and its original two-year mission has since been
extended indefinitely.

Civilian and commercial applications

Multiple civilian applications of UGVs are being implemented to
automate processes in manufacturing and production environments. They
have also been developed as autonomous tour guides for the Carnegie
Museum of Natural History and the Swiss National Exhibition Expo.

Agriculture

UGVs are one type of agricultural robot. Unmanned harvesting tractors
can be operated around the clock, making it possible to handle short
windows for harvesting. UGVs are also used for spraying and thinning.
They can also be used to monitor the health of crops and livestock.

Manufacturing
In the manufacturing environment, UGVs are used for transporting
materials.[21] They are often automated and referred to as AGVs.
Aerospace companies use these vehicles for precision positioning and for
transporting heavy, bulky pieces between manufacturing stations, which
is less time-consuming than using large cranes and keeps people
out of dangerous areas.

Unmanned surface vehicles

Unmanned surface vehicles (USV; also known as Unmanned Surface
Vessels (USV) or Autonomous Surface Vehicles (ASV)) are boats that
operate on the surface of the water without a crew.
USVs are valuable in oceanography, as they are more capable than
moored or drifting weather buoys, but far cheaper than the equivalent
weather ships and research vessels and more flexible than commercial-
ship contributions. Wave gliders, in particular, harness wave energy for
primary propulsion and, with solar cells to power their electronics, have
months of marine persistence for both academic and naval applications.
Powered USVs are popular for use in hydrographic survey. Using a small
USV in parallel to traditional survey vessels as a 'force-multiplier' can
double survey coverage and reduce time on-site. This method was used
for a survey carried out in the Bering Sea, off Alaska; the ASV Global 'C-
Worker 5' autonomous surface vehicle (ASV) collected 2,275 nautical
miles of survey, 44% of the project total. This was a first for the survey
industry and resulted in a saving of 25 days at sea.[8]
Military applications for USVs include powered seaborne targets and
minehunting.
In the future, many unmanned cargo ships are expected to cross the
world's waters.
Saildrone
Cognitive robotics
Cognitive robotics is concerned with endowing a robot with intelligent
behavior by providing it with a processing architecture that will allow it
to learn and reason about how to behave in response to complex goals in
a complex world. Cognitive robotics may be considered the engineering
branch of embodied cognitive science and embodied embedded
cognition.

Core issues
While traditional cognitive modeling approaches have assumed symbolic
coding schemes as a means for depicting the world, translating the world
into these kinds of symbolic representations has proven to be problematic
if not untenable. Perception and action and the notion of symbolic
representation are therefore core issues to be addressed in cognitive
robotics.

Starting point
Cognitive robotics views animal cognition as a starting point for the
development of robotic information processing, as opposed to more
traditional Artificial Intelligence techniques. Target robotic cognitive
capabilities include perception processing, attention
allocation, anticipation, planning, complex motor coordination, reasoning
about other agents and perhaps even about their own mental states.
Robotic cognition embodies the behavior of intelligent agents in the
physical world (or a virtual world, in the case of simulated cognitive
robotics). Ultimately the robot must be able to act in the real world.
Learning techniques
Motor babbling
Main article: Motor babbling
A preliminary robot learning technique called motor babbling involves
correlating pseudo-random complex motor movements by the robot with
resulting visual and/or auditory feedback such that the robot may begin
to expect a pattern of sensory feedback given a pattern of motor output.
Desired sensory feedback may then be used to inform a motor control
signal. This is thought to be analogous to how a baby learns to reach for
objects or learns to produce speech sounds. For simpler robot systems,
where for instance inverse kinematics may feasibly be used to transform
anticipated feedback (desired motor result) into motor output, this step
may be skipped.
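A toy sketch of this idea follows, assuming a simulated planar two-link arm and a simple nearest-neighbour inverse model; both are assumptions for illustration, not a published method:

```python
# Motor babbling: issue random motor commands, record the sensory
# outcome, and later invert the mapping by looking up the command
# whose remembered outcome is closest to the desired one.

import math
import random

def forward_kinematics(q1: float, q2: float):
    """Hand position of a planar 2-link arm with unit link lengths."""
    x = math.cos(q1) + math.cos(q1 + q2)
    y = math.sin(q1) + math.sin(q1 + q2)
    return x, y

# Babbling phase: random joint angles -> observed hand positions.
memory = []
for _ in range(5000):
    q = (random.uniform(-math.pi, math.pi), random.uniform(-math.pi, math.pi))
    memory.append((forward_kinematics(*q), q))

def reach(target):
    """Pick the babbled command whose remembered outcome is closest."""
    return min(memory, key=lambda m: (m[0][0] - target[0]) ** 2
                                     + (m[0][1] - target[1]) ** 2)[1]

print(reach((1.0, 1.0)))  # joint angles expected to bring the hand near (1, 1)
```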
Imitation
Once a robot can coordinate its motors to produce a desired result, the
technique of learning by imitation may be used. The robot monitors the
performance of another agent and then the robot tries to imitate that
agent. It is often a challenge to transform imitation information from a
complex scene into a desired motor result for the robot. Note that
imitation is a high-level form of cognitive behavior and imitation is not
necessarily required in a basic model of embodied animal cognition.
Knowledge acquisition
A more complex learning approach is "autonomous knowledge
acquisition": the robot is left to explore the environment on its own. A
system of goals and beliefs is typically assumed.
A somewhat more directed mode of exploration can be achieved by
"curiosity" algorithms, such as Intelligent Adaptive Curiosity[1][2] or
Category-Based Intrinsic Motivation.[3] These algorithms generally
involve breaking sensory input into a finite number of categories and
assigning some sort of prediction system (such as an Artificial Neural
Network) to each. The prediction system keeps track of the error in its
predictions over time. Reduction in prediction error is considered
learning. The robot then preferentially explores categories in which it is
learning (or reducing prediction error) the fastest.
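A minimal sketch of such a prediction-error scheme might look as follows; the running-average predictors, window sizes, and learning rate are illustrative assumptions of this sketch, not the cited algorithms:

```python
# Curiosity-driven exploration: track prediction error per category and
# preferentially explore where error is dropping (learning) fastest.

from collections import defaultdict, deque

class CuriosityExplorer:
    def __init__(self, categories):
        self.errors = {c: deque(maxlen=50) for c in categories}
        self.prediction = defaultdict(float)  # one scalar predictor per category

    def observe(self, category, value):
        err = abs(value - self.prediction[category])
        self.errors[category].append(err)
        # Nudge the per-category predictor towards the new observation.
        self.prediction[category] += 0.1 * (value - self.prediction[category])

    def learning_progress(self, category):
        e = self.errors[category]
        if len(e) < 20:
            return float("inf")   # explore barely-seen categories first
        old = sum(list(e)[:10]) / 10
        new = sum(list(e)[-10:]) / 10
        return old - new          # positive when prediction error is dropping

    def choose_category(self):
        # Prefer the category where prediction error shrinks fastest.
        return max(self.errors, key=self.learning_progress)

explorer = CuriosityExplorer(["vision", "touch"])
explorer.observe("vision", 1.0)
print(explorer.choose_category())
```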
Other architectures
Some researchers in cognitive robotics have tried using architectures such
as ACT-R and Soar as the basis of their cognitive
robotics programs. These highly modular symbol-processing
architectures have been used to simulate operator performance and
human performance when modeling simplistic and symbolized laboratory
data. The idea is to extend these architectures to handle real-world
sensory input as that input continuously unfolds through time. What is
needed is a way to somehow translate the world into a set of symbols and
their relationships.

Questions
Some of the fundamental questions to still be answered in cognitive
robotics are:
• How much human programming should or can be involved to support
the learning processes?
• How can one quantify progress? One commonly adopted approach is
reward and punishment. But what kind of reward and what kind of
punishment? In humans, when teaching a child, for example, the
reward would be candy or some encouragement, and the punishment
can take many forms. But what is an effective way with robots?[citation needed]

Evolutionary robotics


Evolutionary robotics (ER) is a methodology that uses evolutionary
computation to develop controllers and/or hardware for autonomous
robots. Algorithms in ER frequently operate on populations of candidate
controllers, initially selected from some distribution. This population is
then repeatedly modified according to a fitness function. In the case
of genetic algorithms (or "GAs"), a common method in evolutionary
computation, the population of candidate controllers is repeatedly grown
according to crossover, mutation and other GA operators and then culled
according to the fitness function. The candidate controllers used in ER
applications may be drawn from some subset of the set of artificial neural
networks, although some applications (including SAMUEL, developed at
the Naval Center for Applied Research in Artificial Intelligence) use
collections of "IF THEN ELSE" rules as the constituent parts of an
individual controller. It is theoretically possible to use any set of
symbolic formulations of a control law (sometimes called a policy in
the machine learning community) as the space of possible candidate
controllers. Artificial neural networks can also be used for robot
learning outside the context of evolutionary robotics. In particular, other
forms of reinforcement learning can be used for learning robot
controllers.
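The population loop described above can be sketched compactly. The toy fitness function below merely stands in for running a controller on a (simulated) robot, and all constants and names are illustrative assumptions:

```python
# Schematic ER/GA loop: evaluate a population of controller parameter
# vectors, cull the worst, and regrow via crossover and mutation.

import random

POP, GENS, DIM = 30, 50, 8

def fitness(genome):
    """Placeholder: in real ER this would score a (simulated) robot
    executing the controller encoded by the genome."""
    return -sum((g - 0.5) ** 2 for g in genome)

def crossover(a, b):
    cut = random.randrange(1, DIM)
    return a[:cut] + b[cut:]

def mutate(g, rate=0.1):
    return [x + random.gauss(0, 0.1) if random.random() < rate else x for x in g]

population = [[random.random() for _ in range(DIM)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]                  # cull the worst half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]  # regrow via GA operators
    population = parents + children

print(max(fitness(g) for g in population))  # best fitness found
```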
Developmental robotics is related to, but differs from, evolutionary
robotics. ER uses populations of robots that evolve over time, whereas
DevRob is interested in how the organization of a single robot's control
system develops through experience, over time.

Objectives
Evolutionary robotics is done with many different objectives, often at the
same time. These include creating useful controllers for real-world robot
tasks, exploring the intricacies of evolutionary theory (such as
the Baldwin effect), reproducing psychological phenomena, and finding
out about biological neural networks by studying artificial ones. Creating
controllers via artificial evolution requires a large number of evaluations
of a large population. This is very time-consuming, which is one of the
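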
reasons why controller evolution is usually done in software. Also, initial
random controllers may exhibit potentially harmful behaviour, such as
repeatedly crashing into a wall, which may damage the robot.
Transferring controllers evolved in simulation to physical robots is very
difficult and a major challenge in using the ER approach. The reason is
that evolution is free to explore all possibilities to obtain a high fitness,
including any inaccuracies of the simulation[citation needed]. This need for a
large number of evaluations, requiring fast yet accurate computer
simulations, is one of the limiting factors of the ER approach[citation needed].
In rare cases, evolutionary computation may be used to design the
physical structure of the robot, in addition to the controller. One of the
most notable examples of this was Karl Sims' demo for Thinking
Machines Corporation.

Motivation

Many of the commonly used machine learning algorithms require a set
of training examples consisting of both a hypothetical input and a desired
answer. In many robot learning applications the desired answer is an
action for the robot to take. These actions are usually not known
explicitly a priori, instead the robot can, at best, receive a value indicating
the success or failure of a given action taken. Evolutionary algorithms are
natural solutions to this sort of problem framework, as the fitness
function need only encode the success or failure of a given controller,
rather than the precise actions the controller should have taken. An
alternative to the use of evolutionary computation in robot learning is the
use of other forms of reinforcement learning, such as q-learning, to learn
the fitness of any particular action, and then use predicted fitness values
indirectly to create a controller.
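As an illustration of this alternative, a minimal tabular Q-learning update is shown below. It needs only a scalar reward signalling success or failure, never the correct action itself; the state and action names are placeholders, not a robot-specific implementation:

```python
# Tabular Q-learning: learn the value of (state, action) pairs from
# scalar rewards, then act greedily on the learned values.

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = ["left", "right", "forward"]
Q = defaultdict(float)  # maps (state, action) -> estimated value

def choose_action(state):
    if random.random() < EPSILON:                     # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    target = reward + GAMMA * best_next
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])

# One learning step: acting in state "corridor" hit a wall (reward -1).
a = choose_action("corridor")
update("corridor", a, -1.0, "corridor")
```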
Humanoid

Honda's ASIMO is an example of a humanoid robot.


A humanoid is something that has an appearance resembling a human
without actually being one. The earliest recorded use of the term, in 1870,
referred to indigenous peoples in areas colonized by Europeans. By the
20th century, the term came to describe fossils which were
morphologically similar, but not identical, to those of the human skeleton.
Although this usage was common in the sciences for much of the 20th
century, it is now considered rare.[1] More generally, the term can refer to
anything with distinctly human characteristics or adaptations, such as
possessing opposable anterior forelimb-appendages (i.e. thumbs), visible
spectrum-binocular vision (i.e. having two eyes), or biomechanic
plantigrade-bipedalism (i.e. the ability to walk on heels and metatarsals in
an upright position). Science fiction media frequently present sentient
extraterrestrial lifeforms as humanoid as a byproduct of convergent
evolution theory.

Although there are no known humanoid species outside the genus Homo,
the theory of convergent evolution speculates that different species may
evolve similar traits, and in the case of a humanoid these traits may
include intelligence and bipedalism and other humanoid skeletal changes,
as a result of similar evolutionary pressures. American psychologist and
Dinosaur intelligence theorist Harry Jerison suggested the possibility of
sapient dinosaurs. In a 1978 presentation at the American Psychological
Association, he speculated that dromiceiomimus could have evolved into
a highly intelligent species like human beings.[2] In his book, Wonderful
Life, Stephen Jay Gould argues that if the tape of life were re-wound and
played back, life would have taken a very different course.[3] Simon
Conway Morris counters this argument, arguing that convergence is a
dominant force in evolution and that since the same environmental and
physical constraints act on all life, there is an "optimum" body plan that
life will inevitably evolve toward, with evolution bound to stumble upon
intelligence, a trait of primates, crows, and dolphins, at some point.[4]

A model of the hypothetical Dinosauroid, Dinosaur Museum, Dorchester,
UK
In 1982, Dale Russell, curator of vertebrate fossils at the National
Museum of Canada in Ottawa, conjectured a possible evolutionary path
that might have been taken by the dinosaur Troodon had it not perished in
the Cretaceous–Paleogene extinction event 66 million years ago,
suggesting that it could have evolved into intelligent beings similar in
body plan to humans, becoming a humanoid of dinosaur origin. Over
geologic time, Russell noted that there had been a steady increase in the
encephalization quotient or EQ (the relative brain weight when compared
to other species with the same body weight) among the dinosaurs.[5]
Russell had discovered the first Troodontid skull, and noted that, while its
EQ was low compared to humans, it was six times higher than that of
other dinosaurs. If the trend in Troodon evolution had continued to the
present, its brain case could by now measure 1,100 cm3; comparable to
that of a human. Troodontids had semi-manipulative fingers, able to
grasp and hold objects to a certain degree, and binocular vision.[6]
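The encephalization quotient used in this comparison can be made concrete with a small computation. The allometric constant and exponent below follow one common (Jerison-style) formulation for mammals and are an assumption of this sketch:

```python
# EQ = actual brain mass / brain mass expected for the same body mass.
# One common expectation for mammals is E = 0.12 * P**(2/3), with
# masses in grams (an assumption of this sketch).

def encephalization_quotient(brain_g: float, body_g: float) -> float:
    expected = 0.12 * body_g ** (2.0 / 3.0)
    return brain_g / expected

# A human brain (~1350 g) in a ~65 kg body yields an EQ around 7,
# well above the value of 1 expected for a typical mammal of that size.
print(round(encephalization_quotient(1350, 65000), 1))
```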
Russell proposed that this "Dinosauroid", like most dinosaurs of the
troodontid family, would have had large eyes and three fingers on each
hand, one of which would have been partially opposed. As with most
modern reptiles (and birds), he conceived of its genitalia as internal.
Russell speculated that it would have required a navel, as a placenta aids
the development of a large brain case. However, it would not have
possessed mammary glands, and would have fed its young, as birds do,
on regurgitated food. He speculated that its language would have sounded
somewhat like bird song.[6][7]
Russell's thought experiment has been met with criticism from other
paleontologists since the 1980s, many of whom point out that his
Dinosauroid is overly anthropomorphic. Gregory S. Paul (1988) and
Thomas R. Holtz Jr. consider it "suspiciously human", and
Darren Naish has argued that a large-brained, highly intelligent
troodontid would retain a more standard theropod body plan, with a
horizontal posture and long tail, and would probably manipulate objects
with the snout and feet in the manner of a bird, rather than with human-
like "hands".[7]
In robotics

A humanoid robot is a robot that is based on the general structure of a
human, such as a robot that walks on two legs and has an upper torso, or
a robot that has two arms, two legs and a head. A humanoid robot does
not necessarily look convincingly like a real person, for example the
ASIMO humanoid robot has a helmet instead of a face.
An android (male) or gynoid (female) is a humanoid robot designed to
look as much like a real person as possible, although these words are
frequently perceived to be synonymous with humanoid.
While there are many humanoid robots in fictional stories, some real
humanoid robots have been developed since the 1990s, and some real
human-looking android robots have been developed since 2002.
