
"Godfather of AI" Geoffrey Hinton: The 60 Minutes Interview - YouTube

https://www.youtube.com/watch?v=qrvK_KuIeJk

Transcript:

(00:02) Whether you think artificial intelligence will save the world or end it, you have Geoffrey Hinton to thank. Hinton has been called the Godfather of AI: a British computer scientist whose controversial ideas helped make advanced artificial intelligence possible, and so changed the world. Hinton believes that AI will do enormous good, but tonight he has a warning. He says that AI systems may be more intelligent than we know, and there's a chance the machines could take over. Which made us ask the question... The story will continue in a

(00:47) moment. Does humanity know what it's doing? No. Um, I think we're moving into a period when, for the first time ever, we may have things more intelligent than us. You believe they can understand? Yes. You believe they are intelligent? Yes. You believe these systems have experiences of their own, and can make decisions based on those experiences, in the same sense as people do? Yes. Are they conscious? I think they probably don't have much self-awareness at present, so in that sense I don't think they're conscious. Will they have self-awareness,

(01:29) consciousness? Oh yes, I think they will, in time. And so human beings will be the second most intelligent beings on the planet? Yeah. Geoffrey Hinton told us the artificial intelligence he set in motion was an accident, born of a failure. In the 1970s, at the University of Edinburgh, he dreamed of simulating a neural network on a computer, simply as a tool for what he was really studying: the human brain. But back then, almost no one thought software could mimic the brain. His PhD advisor told him to drop it before it ruined his career. Hinton

(02:13) says he failed to figure out the human mind, but the long pursuit led to an artificial version. It took much, much longer than I expected. It took like 50 years before it worked well, but in the end it did work well. At what point did you realize that you were right about neural networks, and most everyone else was wrong? I always thought I was right. In 2019, Hinton and collaborators Yann LeCun, on the left, and Yoshua Bengio won the Turing Award, the Nobel Prize of computing. To understand how their work on artificial neural networks helped
(02:56) machines learn to learn, let us take you to a game. Look at that. Oh my goodness. This is Google's AI lab in London, which we first showed you this past April. Geoffrey Hinton wasn't involved in this soccer project, but these robots are a great example of machine learning. The thing to understand is that the robots were not programmed to play soccer. They were told to score; they had to learn how on their own. Oh, goal! In general, here's how AI does it: Hinton and his collaborators created software in layers, with each layer

(03:42) handling part of the problem. That's the so-called neural network. But this is the key: when, for example, the robot scores, a message is sent back down through all of the layers that says, that pathway was right. Likewise, when an answer is wrong, that message goes down through the network. So correct connections get stronger, wrong connections get weaker, and by trial and error the machine teaches itself. You think these AI systems are better at learning than the human mind? I think they may be, yes. And at present they're quite a lot smaller.
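The learning scheme described above, where feedback about a right or wrong answer travels back and strengthens or weakens connections, can be sketched in a few lines of Python. This is a toy illustration, not the code behind the robots or chatbots: a single artificial neuron learns the logical OR function, with a standard textbook error-driven update rule and an arbitrarily chosen learning rate.

```python
# Toy illustration of error-driven learning: a single artificial "neuron"
# learns the logical OR function by trial and error. When its answer is
# right, nothing changes; when it is wrong, the connections (weights)
# that produced the answer are nudged toward the correct answer.

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]   # connection strengths: initially the neuron knows nothing
bias = 0.0
rate = 0.1             # how strongly each error adjusts the connections

def predict(x):
    total = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > 0 else 0

for _ in range(20):                      # repeated trial and error
    for x, target in examples:
        error = target - predict(x)      # 0 if right, +1 or -1 if wrong
        for i in range(len(weights)):    # feedback adjusts each connection
            weights[i] += rate * error * x[i]
        bias += rate * error

print([predict(x) for x, _ in examples])  # -> [0, 1, 1, 1]
```

After a handful of passes the wrong connections have been weakened and the right ones strengthened, and the neuron answers all four cases correctly. Real networks apply the same idea across many layers and billions of connections, which is what the backpropagated "that pathway was right" message refers to.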

(04:23) So even the biggest chatbots only have about a trillion connections in them. The human brain has about 100 trillion. And yet, with the trillion connections in a chatbot, it knows far more than you do in your 100 trillion connections, which suggests it's got a much better way of getting knowledge into those connections. A much better way of getting knowledge, that isn't fully understood? We have a very good idea of sort of roughly what it's doing, but as soon as it gets really complicated, we don't actually know

(04:55) what's going on, any more than we know what's going on in your brain. What do you mean, we don't know exactly how it works? It was designed by people. No, it wasn't. What we did was design the learning algorithm. That's a bit like designing the principle of evolution. But when this learning algorithm then interacts with data, it produces complicated neural networks that are good at doing things, but we don't really understand exactly how they do those things. What are the implications of these systems

(05:27) autonomously writing their own computer code and executing their own computer code? That's a serious worry, right? So one of the ways in which these systems might escape control is by writing their own computer code to modify themselves, and that's something we need to seriously worry about. What do you say to someone who might argue, if the systems become malevolent, just turn them off? They will be able to manipulate people, right? And these will be very good at convincing people, because they'll have learned from all the novels that

(06:02) were ever written, all the books by Machiavelli, all the political connivances. They'll know all that stuff. They'll know how to do it. Know-how of the human kind runs in Geoffrey Hinton's family. His ancestors include the mathematician George Boole, who invented the basis of computing, and George Everest, who surveyed India and got that mountain named after him. But as a boy, Hinton himself could never climb the peak of expectations raised by a domineering father. Every morning when I went to school, he'd actually say to me,

(06:44) as I walked down the driveway, get in there pitching, and maybe when you're twice as old as me, you'll be half as good. Dad was an authority on beetles. He knew a lot more about beetles than he knew about people. Did you feel that as a child? A bit, yes. When he died, we went to his study at the university, and the walls were lined with boxes of papers on different kinds of beetle. And just near the door, there was a slightly smaller box that simply said, "not insects," and that's where he had all the things about the

(07:18) family. Today, at 75, Hinton recently retired after what he calls 10 happy years at Google. Now he's professor emeritus at the University of Toronto. And he happened to mention he has more academic citations than his father. Some of his research led to chatbots like Google's Bard, which we met last spring. Confounding. Absolutely confounding. We asked Bard to write a story from six words: For sale, baby shoes, never worn. Holy cow. The shoes were a gift from my wife, but we never had a baby. Bard created a deeply human tale of a man

(08:03) whose wife could not conceive, and a stranger who accepted the shoes to heal the pain after her miscarriage. I am rarely speechless. I don't know what to make of this. Chatbots are said to be language models that just predict the next most likely word based on probability. You'll hear people saying things like, they're just doing autocomplete, they're just trying to predict the next word, and they're just using statistics. Well, it's true they're just trying to predict the next

(08:39) word, but if you think about it, to predict the next word you have to understand the sentences. So the idea that they're just predicting the next word, so they're not intelligent, is crazy. You have to be really intelligent to predict the next word really accurately. To prove it, Hinton showed us a test he devised for ChatGPT-4, the chatbot from a company called OpenAI. It was sort of reassuring to see a Turing Award winner mistype and blame the computer. Oh, damn this thing. We're going to go back and start again. That's okay. Hinton's test was a riddle about house painting. An answer would demand

(09:18) reasoning and planning. This is what he typed into ChatGPT-4: The rooms in my house are painted white or blue or yellow, and yellow paint fades to white within a year. In two years' time, I'd like all the rooms to be white. What should I do? The answer began in one second. ChatGPT-4 advised that the rooms painted in blue need to be repainted. The rooms painted in yellow don't need to be repainted, because they would fade to white before the deadline. And, oh, I didn't even think of that: it warned, if you paint the yellow rooms white, there's

(10:01) a risk the color might be off when the yellow fades. Besides, it advised, you'd be wasting resources painting rooms that were going to fade to white anyway. You believe that ChatGPT-4 understands? I believe it definitely understands, yes. And in five years' time? I think in five years' time it may well be able to reason better than us. Reasoning that, he says, is leading to AI's great risks and great benefits. So an obvious area where there are huge benefits is healthcare. AI is already comparable with radiologists at understanding what's going on in

(10:42) medical images. It's going to be very good at designing drugs. It already is designing drugs. So that's an area where it's almost entirely going to do good. I like that area. The risks are what? Well, the risks are having a whole class of people who are unemployed and not valued much, because what they used to do is now done by machines. Other immediate risks he worries about include fake news, unintended bias in employment and policing, and autonomous battlefield robots. What is a path forward that

(11:25) ensures safety? I don't know. I can't see a path that guarantees safety. We're entering a period of great uncertainty, where we're dealing with things we've never dealt with before. And normally, the first time you deal with something totally novel, you get it wrong. And we can't afford to get it wrong with these things. Can't afford to get it wrong, why? Well, because they might take over. Take over from humanity? Yes, that's a possibility. Why would they want to? I'm not saying it will happen. If we

(11:55) could stop them ever wanting to, that would be great, but it's not clear we can stop them ever wanting to. Geoffrey Hinton told us he has no regrets, because of AI's potential for good, but he says now is the moment to run experiments to understand AI, for governments to impose regulations, and for a world treaty to ban the use of military robots. He reminded us of Robert Oppenheimer, who, after inventing the atomic bomb, campaigned against the hydrogen bomb: a man who changed the world and found the world beyond his

(12:37) control. It may be we look back and see this as a kind of turning point, when humanity had to make the decision about whether to develop these things further, and what to do to protect themselves if they did. Um, I don't know. I think my main message is, there's enormous uncertainty about what's going to happen next. These things do understand, and because they understand, we need to think hard about what's going to happen next. And we just don't know.
