Humanoid Robots
Humanoid Robots
David J. Bruemmer, Mark S. Swinson, in Encyclopedia of Physical Science and
Technology (Third Edition), 2003
1 INTRODUCTION
Humanoid robots are expected to exist and work in a close relationship with human
beings in the everyday world and to serve the needs of physically handicapped
people. These robots must be able to cope with the wide variety of tasks and
objects encountered in dynamic, unstructured environments. Humanoid robots for personal use by elderly and disabled people must be safe and easy to use. Therefore, humanoid robots need a lightweight body, high flexibility, many kinds of sensors, and high intelligence. The successful introduction of these robots into human environments will rely on the development of human-friendly components.
The ideal end-effector for an artificial arm or a humanoid would be able to use the
tools and objects that a person uses when working in the same environment. The
modeling of a sophisticated hand is one of the challenges in the design of humanoid
robots and artificial arms. Many research activities have been carried out to develop artificial robot hands with capabilities similar to those of the human hand. Such hands require many actuators to achieve dexterous movement [6,7]. However, the control system of the humanoid robot becomes more complicated if additional actuators are used for the hand. This is a key issue for the artificial arm because a
handicapped person might not be able to control a complex hand mechanism with
many actuators. For this reason, we propose to develop a lightweight hand driven
by a single actuator. To this end we adopted a new mechanism for the cooperative
movement of finger and palm joints.
1 Introduction
A humanoid robot has substantial advantages when working in environments where human beings live. The main advantage is that a humanoid robot can act as a human being does in such an environment, without any prior adjustment of the environment for the robot. On the other hand, human-friendly and functional machinery becomes more necessary as robots are used closer to the human beings they care for.
Based on the needs mentioned above, AIST, which belongs to MITI, has since fiscal year 1998 promoted the research and development project “Humanoid and Human Friendly Robotics System” as a part of the Industrial Science and Technology Frontier Program (ISTF).
In the first term, from fiscal year 1998 to 1999, platform systems were developed as a common base for the research and development. In the second term, from fiscal year 2000 to 2002, various kinds of element technologies for the applications in which humanoid and human-friendly robots are expected to be used will be developed using the platform systems already built.
One of the most interesting research areas for neuroscience and robotics is the theory of motor learning in humans, and humanoid robots can effectively
be used to validate research hypotheses. What kinds of capability are needed for a
humanoid robot in such a research area? One of the most important properties is
that its kinematics and dynamics are similar to those of humans, e.g., that the weight, size, position of the center of mass, and ideally the viscoelastic properties of the joints are human-like. Equally important is the availability of sensory information to mimic human proprioception, and the ability to produce joint torques that realize human levels of performance but also reflect human limitations.
There are plans to extend the capabilities of this framework by developing a unified human–robot system that works both proactively (by preparing and suggesting an outcome schedule) and reactively. The HRI should be safe, efficient, and comprehensible, while maintaining adequate proxemics.
The robot must be skilled enough to undertake collaborative tasks, again in both proactive and reactive modes, and should operate safely and logically, following the ethics of the society in which it works. The communication and joint action execution required between the human and the robot indicate a need to implement cognitive skills in robots (Knoblich, Butterfill, & Sebanz, 2011; Sebanz, Bekkering, & Knoblich, 2006). These skills include working toward a collaborative objective that has previously been ascertained and agreed upon, establishing a realistic ecosystem in which the robot's exteroceptive sensing is supplemented by conclusions drawn from earlier observations, and creating a shared representation of reality that includes deduced general knowledge, understanding, and behavior for both the robot and its human partner.
Applications
Erik T. Mueller, in Commonsense Reasoning (Second Edition), 2015
14.3 Vision
Murray Shanahan and David Randell use the event calculus to implement the
higher-level vision component of Ludwig, an upper-torso humanoid robot. Ludwig
has two arms, each with three degrees of freedom, and a stereo camera hooked up
to a head with two degrees of freedom (pan and tilt). The low-level vision component
uses off-the-shelf edge detection software to map raw images from the camera into
a list of edges. The list of edges is fed to the higher-level vision component, which
is responsible for recognizing shapes. This component is implemented using the
event calculus in Prolog.
The higher-level vision component consists of three layers. The first layer generates
hypotheses about what regions are in view, based on the input list of edges. The
second layer generates hypotheses about what aspects are in view, based on what
regions are in view. The aspects of a shape are the various ways it can appear when
viewed from different angles; for example, a wedge viewed from above appears to
be a rectangle, but a wedge viewed from the side appears to be a triangle. The third
layer generates hypotheses about what shapes are in view, based on the aspects that
are in view over time.
We present here a simplified version of Ludwig’s third layer. The predicate Arc(s, a1,
a2) represents that, by gradually changing the orientation of a shape s with respect
to the camera, it is possible for the appearance of s to change from aspect a1 to
aspect a2. For example, the appearance of a wedge can change from a rectangle
to a rectangle plus an adjacent triangle, but it cannot change immediately from a
rectangle to a triangle—it must first appear as a rectangle plus an adjacent triangle.
The predicate Shape(o, s) represents that object o has shape s. The fluent Aspect(o,
a) represents that the appearance of object o is aspect a. The event Change(o, a1, a2)
represents that the appearance of object o changes from aspect a1 to aspect a2.
We start with state constraints that say that an object has unique shape and aspect:
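(The displayed formulas are not reproduced in this excerpt; a plausible reconstruction in event calculus notation, following the prose description, is:
\[ Shape(o, s_1) \wedge Shape(o, s_2) \Rightarrow s_1 = s_2 \]
\[ HoldsAt(Aspect(o, a_1), t) \wedge HoldsAt(Aspect(o, a_2), t) \Rightarrow a_1 = a_2 \] )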
We have effect axioms that state that if the aspect of an object is a1, the shape of the
object is s, it is possible for the appearance of s to change from aspect a1 to aspect
a2, and the object changes from a1 to a2, then the aspect of the object will be a2 and
will no longer be a1:
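(Again reconstructing the omitted formulas, one plausible reading of these effect axioms in event calculus notation is:
\[ HoldsAt(Aspect(o, a_1), t) \wedge Shape(o, s) \wedge Arc(s, a_1, a_2) \Rightarrow Initiates(Change(o, a_1, a_2), Aspect(o, a_2), t) \]
\[ HoldsAt(Aspect(o, a_1), t) \wedge Shape(o, s) \wedge Arc(s, a_1, a_2) \Rightarrow Terminates(Change(o, a_1, a_2), Aspect(o, a_1), t) \] )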
Now, suppose we have the following visual knowledge about two shapes Shape1 and
Shape2:
We can then show that the shape of Object1 must be Shape1 and not Shape2:
Note that this inference depends crucially on the sequence of images over time
rather than on a single image. These sorts of inferences are supported by the ability
of the event calculus to reason about time.
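To make this style of inference concrete, the following sketch (written in Python rather than the Prolog event calculus used in Ludwig) checks shape hypotheses against an observed aspect sequence. The Arc facts for Shape1 follow the wedge example above; Shape2 and its single arc are hypothetical stand-ins, not the definitions used in the book.

# Hypothetical Arc relation: for each shape, the pairs of aspects between which
# its appearance can change by gradually rotating the shape.
ARC = {
    "Shape1": {("rectangle", "rectangle+triangle"), ("rectangle+triangle", "triangle")},
    "Shape2": {("rectangle", "triangle")},
}

def consistent(shape, observed_aspects):
    """A shape hypothesis survives only if every observed change of aspect
    corresponds to an Arc of that shape (in either direction)."""
    arcs = ARC[shape]
    for a1, a2 in zip(observed_aspects, observed_aspects[1:]):
        if a1 != a2 and (a1, a2) not in arcs and (a2, a1) not in arcs:
            return False
    return True

# Aspect sequence seen for Object1 over successive images.
observed = ["rectangle", "rectangle+triangle", "triangle"]
print([s for s in ARC if consistent(s, observed)])
# ['Shape1']: Shape2 has no arc through the intermediate aspect, so it is ruled out.

As in the text, the disambiguation only works because the sequence of aspects over time is taken into account; a single image of a rectangle would leave both hypotheses open.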
Regardless of the technology used (i.e., virtual reality or robots), the user is generally equipped with a head-mounted display, which provides a first-person visual
perspective from the virtual or robotic avatar. The user is able to see the limbs and
part of the body of their avatars if they look down, where they would normally see
their real limbs and torso. In addition, full-body identification can be achieved by
reflecting the avatar's appearance in physical and virtual mirrors or other surfaces
(Aymerich-Franch et al., 2016, 2015; Aymerich-Franch, Kizilcec, & Bailenson, 2014;
González-Franco, Pérez-Marcos, Spanlang, & Slater, 2010).
While feedback from other senses is not considered a necessary condition for inducing the illusion of avatar embodiment, it can contribute to enhancing the embodiment experience (Spanlang et al., 2014). Headsets or speakers are used to provide auditory feedback, while haptic devices for force feedback or object controllers are used for haptic feedback (Fox, Arena, & Bailenson, 2009; Stone, 2001). Olfaction and gustation are generally not implemented.
User movements can be tracked and synchronized to the avatar's movements for the
control of limb and body gestures and to make the avatar walk or move in the space.
In virtual reality, the user's body movements are generally tracked and synchronized to
the avatar body movements, and spaces are rendered according to these movements
(Fox et al., 2009; Spanlang et al., 2014). For robot avatars, control of the robot body
movement can be obtained with a motion capture suit (Aymerich-Franch, Kishore,
& Slater, 2019), a joystick (Aymerich-Franch et al., 2015, 2016), a brain–computer
interface (Alimardani et al., 2013; Gergondet et al., 2011), fMRI (Cohen et al., 2012,
2014), or eye-tracking technologies (Kishore et al., 2014).
The robot needs to estimate the user's level of interest in the topic, and both the human's proximity and gaze are important for this. The robot also integrates nodding,
gesturing and its body posture with its speech during the conversation. Issues in
synchronizing gestures with speech are discussed by [24].
Assessing the level of interest of the user has two sides: how to detect whether the
human partner is interested in the topic or not, and what the system should do as
a result. Detecting the level of interest is part of the system's external interface, and
deciding what to do about it is part of the system's internal management strategy.
In order to assess the interest level correctly, the external interface should not be
limited to verbal feedback, but should include intonation, eye-gaze, gestures, body
language and other factors. The internal strategy for reacting appropriately must
decide what to do not only if the user is clearly interested or clearly not interested,
but also how to continue when the interest level is unclear, which may be a more
difficult decision.
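As an illustration only, the following minimal sketch shows how two of the cues mentioned above (gaze and proximity) might be fused into an interest estimate, with an explicit "unclear" band that triggers a clarifying strategy. The function names, weights, and thresholds are hypothetical and are not taken from the Nao system described here.

# Hypothetical sketch: fuse gaze and proximity cues into an interest estimate.
# All names, weights, and thresholds are illustrative placeholders.

def estimate_interest(gaze_on_robot_ratio, distance_m):
    """Return an interest score in [0, 1] from two exteroceptive cues."""
    gaze_score = max(0.0, min(1.0, gaze_on_robot_ratio))            # fraction of time the user looks at the robot
    proximity_score = max(0.0, min(1.0, (2.5 - distance_m) / 2.0))  # closer than ~2.5 m counts as engaged
    return 0.6 * gaze_score + 0.4 * proximity_score

def choose_strategy(interest):
    """Internal management strategy: decide what to do with the estimate."""
    if interest > 0.7:
        return "continue topic"
    if interest < 0.3:
        return "switch topic"
    return "ask a clarifying question"  # interest level unclear: the harder case

print(choose_strategy(estimate_interest(gaze_on_robot_ratio=0.8, distance_m=1.0)))  # continue topic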
Information about the evaluation of the Nao robot system based on the recorded
user testing sessions at the 8th International Summer Workshop on Multimodal
Interfaces, Metz, 2012 is given in [9] and [3].
Conclusion
Emotion research suggests that, to help learners with various challenges, making
efforts to build learners’ positive affect should be an essential, preliminary step to any
type of instructional support. Tools, such as virtual peers and humanoid robots, are
relatively free from the biases in the real world, so they might provide a safer learning
environment for students who can be marginalized in regular education classrooms
for various reasons. Also, by adding some affective pedagogical strategies, the
relatively simple tool of instructional videos can be utilized to enhance social and
affective qualities in online learning.
Virtual peers, robots, and videos can be effective and affective tools when they are
designed carefully with awareness of affective influence on users. When creating
programs with virtual peers, designers may keep in mind the power of a peer
to positively engage learners and enhance learning experiences. When designing
applications for educational robots, designers should consider how affect opens up
several exciting opportunities for further exploration and innovation. It seems that
affect is more closely related to a robot's capacity to replicate human behavior than to its capacity to replicate a human's physical abilities. Last but not least, when
incorporating videos into online learning, instructors should first be aware of the
affordances and limitations of the medium, as well as of the research in this area.
Unless instructors take care to ensure that students are engaged on an affective
level, instructors who use online videos may see their students’ positive attitudes
decrease. How online videos can be deliberately designed to increase positive student affect should be explored further. The case for doing so is strong, but the
research on precisely how to do so is still weak. As our study suggests, the use of
relationship-building strategies may help.
Overall, it is clear that bringing social and relational learning contexts to educational tools is feasible and crucial to success in technology-based learning. Building on our initial findings, subsequent research needs to be done on how these
technologies can mediate and encourage positive learning experiences in both
one-on-one and group settings.