Showing posts with label Artificial Intelligence. Show all posts

Friday, March 11, 2016

How to build an artificially intelligent plant

The amazing wondering animal Ani Liu just asked me how I would start to build an artificially intelligent plant. That is a beautiful question! Here is what I came up with.


1. To begin, identify an area where a plant would meaningfully benefit from additional cognition. 

Here is an example: plants probably cannot form maps of their environment; they only know gradients. That makes sense, because a plant can only move along a gradient: I need more light; in this direction it gets lighter → grow in this direction. What about mapping the room with a camera and sensors, and measuring where it would be good for the plant to be (light, temperature, commotion, air currents)? And then moving there? What about finding out how the room changes over the course of the day, and identifying an ideal trajectory through that room to follow as a nomadic plant? What about measuring the need for water and nutrients, and actively seeking out a fountain and a shower of plant food when the plant gets hungry or thirsty?
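The map-based part of this idea can be sketched in a few lines of Python. Everything here is hypothetical: the grid size, the sensor names, and the comfort weights are made-up stand-ins for whatever a real robotic pot would measure. The point is only to show the difference from gradient-following: the pot compares the whole map at once.

```python
import random

# Hypothetical sensor readings for a 4x4 grid of room positions.
# Each cell scores light, temperature and draft between 0 and 1.
def sense_room(size=4):
    return {(x, y): {"light": random.random(),
                     "temp": random.random(),
                     "draft": random.random()}
            for x in range(size) for y in range(size)}

def comfort(cell):
    # Made-up weights: the plant mostly wants light and warmth,
    # and mildly dislikes drafts.
    return 0.6 * cell["light"] + 0.3 * cell["temp"] - 0.1 * cell["draft"]

def best_position(room):
    # Unlike a real plant, which can only follow the local gradient,
    # the robotic pot can compare every position on its map at once.
    return max(room, key=lambda pos: comfort(room[pos]))

room = sense_room()
target = best_position(room)
print("drive the pot to", target)
```

A nomadic version would simply re-run this loop as the day progresses, turning the sequence of best positions into a trajectory through the room.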

Tuesday, January 26, 2016

Marvin Minsky

Marvin Minsky passed away last night. Right before I got the sad news, I had spent an hour in conversation with a friend on the other side of the continent, discussing how Marvin influenced our work and our way of looking at what makes us human.

(Marvin Minsky, me and Cosmo Harrigan, at his house)

Marvin Minsky was quite certainly the most influential thinker about the nature of our minds in recent history. When psychology focused on behavior because it had failed to develop methods to study mental processes, thought and intentionality, Marvin realized that minds are information processing systems, and we can make progress in understanding them by building computer models of these processes. Minsky became the founding father of this new computational science of the mind: Artificial Intelligence. 

Marvin Minsky did not think that minds are governed by a single general principle, like neural learning or homeostasis. The richness and depth of what makes us human requires an enormous complexity of cooperating and often self-regulating mechanisms, which Minsky started to address in his famous "Society of Mind" theory. He pushed hard against approaches he considered too simplistic, and inadvertently contributed to the schism between symbolic AI, which set out to address the higher levels of cognition, and connectionist AI, which concentrated on learning, perception, and motor control.

AI has always been the pioneer battalion of computer science, but despite all its engineering successes, it is still far from explaining intelligence. We may have discovered many pieces of the puzzle, but we are still in the early stages of fitting them together.

Nothing is as strongly associated with the Massachusetts Institute of Technology as Artificial Intelligence and the person of Marvin Minsky himself. In the 60 years since Marvin started the field, he inspired several generations of students and researchers to think about minds. Many of his inventions and scientific contributions, his deep yet very accessible books, and his lectures were far ahead of their time, and have lost none of their relevance.


Marvin Minsky's ideas led me, like many others, on my personal scientific quest. It has been an incredible honor to get to know him during the past year, and my thoughts are with his family and his friends.

I believe that today, the need for working on AI is as pressing as ever. Psychology is still unable to formulate and test theories of mental processes, and neuroscientists are entirely focused on nervous systems, not on minds. Artificial Intelligence is our best bet at understanding who we are, and it is time to continue Marvin's work: to recognize and describe the richness of our minds, and to build machines that think, feel, perceive, learn, imagine and dream.

Saturday, January 31, 2015

Why Artificial Intelligence won't just be a bit smarter than humans

If building human-level Artificial Intelligence is possible, it will mean that we have solved the puzzle of the human mind. Replicating the puzzle pieces artificially and putting them together will not give us a system bound by the limits of biological human brains: artificial brains will probably scale much better than biological ones, and our AIs will have more memory, more accuracy, more speed, better integration of mental content, better problem-solving capacity, a practically infinite attention span and focusing abilities, and so on.

Saturday, January 3, 2015

Jaron Lanier pisses me off

Jaron Lanier's piece on Edge ("The Myth of AI") has sparked quite a bit of discussion in my field, and he really pisses me off. He starts out with a straw man argument, insinuating that suspect (and dominant, extremely wealthy!) parts of the tech culture believe that computers are people, and that this belief is intrinsically linked to the idea that algorithms are equivalent to life.

I think that is utter rubbish. We may believe that people are computers, and that life is a set of algorithms (i.e., the inverse of what Lanier claims), but we are acutely aware that computers are simply devices for the creation of causal contexts that are stable enough to run software, and for the most part, we do not care about the metaphysics of biological systems.

Tuesday, May 28, 2013

Another Perversion of Human Thought Processes

My first word was... GOTO.

I grew up in an age of steam locomotives, cobblestone roads and Z80 processors with the clock speed of a cuckoo clock. The first language I ever learned was BASIC, and GOTO was the first command I came across. I still love the exotic, Japanese elegance of these four letters, and the subtle combination of simplicity, power and unconditional brutality embodied in its application. GOTO does not peddle any of the metaphysical humbug that is so commonplace in contemporary programming ("functions as first-class objects", "syntactic closures" etc.). It is a down-to-the-metal, no-nonsense equivalent of how the processor changes the locus of execution from one portion of the stored program to an arbitrary other position. GOTO is the teleportation spell of computer programming: when the computer reads a GOTO, it immediately and unconditionally teleports to the location stated after the GOTO (in BASIC, to an arbitrary program line).

Monday, May 7, 2012

The Lambda Calculus for Absolute Dummies (like myself)

If there is one highly underrated concept in philosophy today, it is computation. Why is it so important? Because computationalism is the new mechanism. For millennia, philosophers have struggled when they wanted to express or doubt that the universe can be explained in a mechanical way, because it is so difficult to explain what a machine is, and what it is not. The term computation does just this: it defines exactly what machines can do, and what they cannot. If the universe/the mind/the brain/bunnies/God is explicable in a mechanical way, then it is a computer, and vice versa.

Unfortunately, most people outside of programming and computer science don't know exactly what computation means. Many may have heard of Turing Machines, but these tend to do more harm than good, because they leave strong intuitions of moving wheels and tapes instead of what they really do: embody the nature of computation.

(A Turing Machine, doing more harm than good. But very cool, nonetheless!)

The Lambda Calculus does exactly the same thing, but without wheels to cloud your vision. It might look frighteningly mathematical from a distance (it has a Greek letter in it, after all!), so hardly anyone outside of academic computer science tends to look at it, but it is unbelievably easy to understand. And if you understand it, you may end up with a much better intuition of computation.
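You can even get a first taste of it without leaving Python, whose `lambda` keyword is named after the calculus. The sketch below uses the standard Church encoding of numbers, where the numeral n is simply "apply a function n times"; arithmetic then falls out of nothing but function application. (This is illustration, not the calculus itself: Python evaluates eagerly and has far more machinery than the pure calculus allows.)

```python
# Church numerals: the number n is "apply f to x, n times".
zero = lambda f: lambda x: x                      # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# Convert a Church numeral back to a Python int by counting applications.
to_int = lambda n: n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
print(to_int(add(two)(two)))   # 4: addition built from nothing but lambdas
```

That a bare increment-counting trick recovers ordinary integers from these nested lambdas is the whole point: numbers, and eventually all computation, can be encoded as pure functions.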