ROBOTICS STS Report
Academia also made much progress in the creation of new robots. In 1966 at the Stanford Research
Institute, Charles Rosen led a research team in developing a robot called "Shakey." Shakey was far
more advanced than the original Unimate, which was designed for specialized industrial
applications. Shakey could wheel around a room, observe the scene with its television-camera "eyes,"
move across unfamiliar surroundings, and, to a certain degree, respond to its environment. It was
given its name because of its wobbly, clattering movements.
Shakey was the first mobile robot that could think independently and interact with its surroundings.
It was conceived as a demonstration project for the Advanced Research Projects Agency (ARPA)
artificial intelligence program.
Shakey could be given a task such as finding a box of a given size, shape, and color, and told to
move it to a designated position. It could search for the box in various rooms, cope with
obstacles, and plan a suitable course of action.
While Shakey was a success in some respects, it was a great failure as far as autonomy was
concerned:
It was controlled by an off-board computer
It could only detect the baseboards of the special rooms it worked in
It could not deal with an unconstrained environment
It was really slow!
https://cs.stanford.edu/people/eroberts/courses/soco/projects/1998-99/robotics/history.html
https://www.youtube.com/watch?v=hxsWeVtb-JQ
Robots can only do what they are told to do – they can’t improvise
This means that safety procedures are needed to protect humans and other
robots
Although robots can be superior to humans in some ways, they are less dexterous
than humans, their processing power is no match for the human brain, and they
cannot compete with a human's ability to understand what they see.
Robots are often very costly, in terms of the initial purchase, maintenance, the need
for extra components, and the need to be programmed for each task.
Whilst this is pretty amazing, you have to admit it's kind of terrifying too. There
have been a lot of concerns surrounding AI developments in recent years.
Artificial intelligence is moving out of the sci-fi genre and into the real world. The
same problems people worried about in I, Robot seem just as relevant
in real life. Elon Musk and Stephen Hawking have spoken openly about their
anxieties concerning the safety of AI. Not only is there the obvious issue of
unemployment (look at all those jobs robots can do), but there's also the issue
of robot ethics: ethics from an overall social perspective, but also ethics
surrounding individual robots. There's so much to get into that I could write an
entire essay on the issue, but to keep things interesting I'm just going to raise
some questions and give us all something to think about in this AI revolution.
Unemployment
So what happens when robots can take over so many human jobs that people
are left unemployed and unable to support their families? Will there be new
jobs created for humans? What happens when the robots learn everything
they need to know? Maybe society will change completely into an era where
people can survive from working within their communities and with their
family. Or maybe humans will simply have to find other jobs. Though, that
might not be so simple. The biggest question, one that can only be answered
with hindsight, is this: when we live in a future where robots are performing
jobs that few people actually enjoyed, will we be thankful and recognise how
much we were never meant to work our lives away? But who knows?
Inequality
If most jobs are given to AI instead of humans, will revenue be
concentrated among fewer people? Won't tech companies and AI specialists
reap all the benefits? If we're looking at a post-work society, how do we
structure a fair post-labour economy?
Humanity
More and more of the robots being built today can hold proper
conversations. They use relatively simple technology, much like that used
online to draw people's attention. They can develop relationships with other
robots and humans based on information they continuously learn and
information they have easy access to (Wi-Fi connections to cloud knowledge
bases). If we can have proper conversations
and interact in human ways with robots, what does this mean for society? Can
robots be programmed to manipulate people? Can robots and humans work
together cohesively? Can we use socially advanced robots to help enhance
company culture?
Artificial error
Robots are amazing and can be near perfect, but they aren't always
completely defect-free. There is always the possibility of artificial intelligence
not being so intelligent. When met with unknown situations, robots will likely
make mistakes. Depending on what kind of work a robot is responsible for,
that mistake could be small or it could be severe. So, the question I propose
here is: who takes responsibility for a robot's errors? Is it the company the
robot works for, or the company that constructed and programmed the robot
(if those companies are separate)? Is it the person the robot learned from?
What happens to the robot after it has made a severe error? What policies will
be in place to make sure the same mistake isn’t repeated?
This topic brings up the idea of robot consciousness. How do we define what
exactly counts as a mistake when a robot is taught to judge by logic? For example, if a
self-driving car is met with the choice of running over a pedestrian or severely
harming the passengers within the vehicle, which will it choose? And if it
chooses, is it then an accident? Again, who is responsible for the robot’s
actions? How will a robot know and learn what is right and what is wrong,
when even that line is sometimes blurred for humans? I guess we really have
to trust the people making these machines, don’t we?
Security
Will cyber security move forward as fast as AI technology? It would be a
disaster to let AI systems and programs be infiltrated by those with ill intentions.
Conclusion
As you can see, there are some big questions that need answering before most
humans will feel comfortable with advanced AI. When the robot Sophia told
humans not to listen to Elon Musk and his concerns, Musk publicly tweeted in
response: 'Just feed it The Godfather movies as input. What's the worst that
could happen?' That pretty much speaks to the major concern: we have no
idea what these robots could pick up and learn, or whether they would be able
to recognise 'being bad'. Musk has co-founded a not-for-profit company,
OpenAI, that focuses on research into creating safe AI. There's no
denying that AI will continue to grow and become more and more useful in
reality. But I think it's wise to understand that, as humans and as creators, we
must have more than just the right intentions. We have to work hard to build
AI that will help more than it will ever hinder.
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2)
A robot must obey the orders given it by human beings, except where such orders would conflict with
the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict
with the First or Second Law.
XAI (explainable AI) is relevant now because it opens up black-box AI models and helps humans
understand how those models reach their decisions.
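One way to get a feel for what XAI does is permutation importance, a simple model-agnostic technique: treat the model as a black box, shuffle one input feature at a time, and see how much the predictions change. Features the model ignores barely move the output; features it relies on move it a lot. The sketch below is illustrative only; the `black_box` function is a made-up stand-in model, not anything from this report.

```python
import random

def black_box(x):
    # Stand-in "black box": secretly depends strongly on x[0],
    # weakly on x[1], and not at all on x[2].
    return 3.0 * x[0] + 0.5 * x[1]

def permutation_importance(model, data, trials=20, seed=1):
    """Score each feature by shuffling its column and measuring the
    average absolute change in the model's predictions."""
    rng = random.Random(seed)
    baseline = [model(x) for x in data]
    scores = []
    for j in range(len(data[0])):          # one score per feature
        total = 0.0
        for _ in range(trials):
            column = [x[j] for x in data]
            rng.shuffle(column)            # break feature j's link to the output
            for i, x in enumerate(data):
                perturbed = list(x)
                perturbed[j] = column[i]
                total += abs(model(perturbed) - baseline[i])
        scores.append(total / (trials * len(data)))
    return scores

gen = random.Random(0)
inputs = [[gen.uniform(-1, 1) for _ in range(3)] for _ in range(50)]
scores = permutation_importance(black_box, inputs)
# Expect: feature 0 scores highest, feature 2 near zero.
```

Even without opening the model, the scores reveal its internal reliance on each input, which is the basic promise of XAI: an outside-in explanation of a black box.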
James Watt
Mechanical horse
Pre-History of Real-World Robots:
The earliest remote-control vehicles were built by Nikola Tesla in the 1890s. Tesla is best known as the inventor of AC power,
induction motors, Tesla coils, and other electrical devices.
Other early robots (1940s-50s) were Grey Walter's "Elsie the Tortoise" ("Machina speculatrix") and the Johns Hopkins
"Beast."
History of Real-World Robots:
Grey Walter's tortoise, restored recently by Owen Holland and fully operational
What are robots made of? Sensors: Light Sensors
Grey Walter's Tortoise
This robot was developed by Grey Walter, a prominent neurophysiologist, in the 1940s-50s. It had only two neurons but could do
a lot with just those two. It could search out a light source to recharge its battery cells and wiggle free if it got stuck. Grey Walter
also trained it by blowing a whistle and immediately kicking it; after the fifth or sixth kick, the robot would back away from the
imagined obstacle it was about to hit.
Isaac Asimov is the father of the laws of robotics. By publishing his Three Laws of Robotics in 1942, Asimov
defined rules for humans and robots to coexist that are more relevant today than ever before.