Subject: AI
Date: Tue, 29 Feb 2000 16:21:15 -0800
From: "david shultz" <the_only_dude@chickmail.com>
To:   johnath@psych.utoronto.ca


I've been thinking about AI, and the more I think about it, the more
puzzled I become.  Is a computer that can think for itself what we are
striving for?  Because if so, we have already found it.  Given a list of
tasks, a computer can choose which to do based on importance or other
factors.  That is exactly what people do as well, because all of our
actions move toward whichever goal has stronger motivation.  It seems to
me that the main difference between us and computers is that they don't
have emotions, and that they aren't aware of their own existence (or any
existence, for that matter).  As for emotions, couldn't those be
represented by data, as far as we are concerned?  And as for awareness,
is that important?


Subject: Re: AI
Date: Tue, 29 Feb 2000 20:03:35 -0500
From: Johnathan Nightingale <johnath@psych.utoronto.ca>
To: david shultz <the_only_dude@chickmail.com>

>I've been thinking about AI, and the more I think about it, the more
>puzzled I become.  Is a computer that can think for itself what we are
>striving for?  Because if so, we have already found it.  Given a list of
>tasks, a computer can choose which to do based on importance or other
>factors.  That is exactly what people do as well, because all of our
>actions move toward whichever goal has stronger motivation.  It seems to
>me that the main difference between us and computers is that they don't
>have emotions, and that they aren't aware of their own existence (or any
>existence, for that matter).  As for emotions, couldn't those be
>represented by data, as far as we are concerned?  And as for awareness,
>is that important?

Well, I couldn't be happier to hear that you're thinking about what I
do, but the problems are a little thornier than you make them out to be.
I'm just gonna brain-dump here, but I'll try to stay on topic.  It's
long, but please read the whole thing...

First off, AI really breaks down into two fields, Hard AI and Soft AI.
Hard AI's job is to do the whole shebang: to build computers that think
the way we do, to use computers to understand how our own minds work,
and to create intelligent agents out of machines.  Soft AI, on the other
hand, is more down to earth.  Its researchers are interested in building
specialized systems with human-like properties, because it would be
convenient to have something that is "intelligent" in some looser sense
of the word.  For instance, there are a bunch of teams working on more
"intelligent" search engines for the net, by which they mean ones that
do a better job of figuring out what you want (www.google.com is an
attempt at this, and I must say, I get much higher quality results
searching there than with a "dumb" search engine that just boasts a
larger database of sites).  Soft AI is also where people work on things
like scheduling programs for major airlines: programs that have to keep
track of an enormous amount of data and make intelligent decisions
according to a bunch of conditions and changing information.  Yet
another example is expert systems for fields like medical diagnosis,
where the goal is to write a program that takes the input of hundreds of
doctors and uses it to diagnose illnesses.  All these areas have been
very successful, and they are AI, but not quite in the way you seem to
mean.  Normally when people think about AI, they think of things like
the robots in books, or the AI from The Matrix - that's Hard AI.
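
Just to give you a feel for how those expert systems work, here's a toy
sketch in Python.  The symptoms and rules below are invented
placeholders; a real system is the same idea with thousands of
doctor-supplied rules, plus machinery for handling uncertainty:

    # A toy rule-based diagnoser.  Each rule says: if all of these
    # conditions are present, then suggest this conclusion.
    RULES = [
        ({"fever", "cough", "aches"},  "possible flu"),
        ({"sneezing", "itchy eyes"},   "possible allergies"),
        ({"fever", "stiff neck"},      "see a doctor immediately"),
    ]

    def diagnose(symptoms):
        # Fire every rule whose conditions all appear in the input.
        return [conclusion for conditions, conclusion in RULES
                if conditions <= symptoms]

    print(diagnose({"fever", "cough", "aches", "headache"}))
    # -> ['possible flu']

Notice that every scrap of "expertise" in there was typed in by a
person - the program contributes nothing but the matching.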

Hard AI is also about two things - some people just want to build an
intelligence for its own sake, and some want to build an intelligence
*like our own* so that we can, in the process, learn to understand how
our minds work.  Now, getting to your first questions, I'll explain
where computers are today.  You're right that if you give a computer a
list of tasks to accomplish, and some rules for deciding which tasks to
do in which order, and which are more "important", then the computer can
manage that.  In fact, that's what your operating system does when it
runs programs: every program has a "priority" setting, and the computer
makes sure high-priority programs execute before low-priority ones.  But
there's much more to intelligence than that - and this is a telling
example, because *you have to tell the computer which tasks are
important, or at least how to decide for itself*.  The computer isn't
thinking, per se; it's just taking the rules you gave it and applying
them blindly.
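
To make that concrete, here's a toy sketch in Python - the task names
and priority numbers are made up, and the point is that all of the
"deciding" lives in one rule a human supplied:

    # A toy priority scheduler.  The computer "chooses" the next task,
    # but only by applying a rule we handed it: bigger number runs
    # first.
    tasks = [
        ("check email",   2),
        ("redraw screen", 5),
        ("index files",   1),
    ]

    def pick_next(tasks):
        # The entire "decision" is this one human-written rule.
        return max(tasks, key=lambda task: task[1])

    print(pick_next(tasks))   # -> ('redraw screen', 5)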

Moreover, consider your sentence "That is exactly what people do as
well, because all of our actions move toward whichever goal has stronger
motivation".  I think it oversimplifies thinking a great deal.  First of
all, consider thought that results in no action.  When you were thinking
about AI, for example, you weren't seeking out a goal in the world and
taking a course of action to reach it; it was entirely internal thought,
with no external behavioural component.  Yes, it eventually led to you
sending me email, but it didn't have to - it could just have been
thinking you did, with no goal in mind and no behaviour produced.  Also,
people are not provided with a list of possible actions and a set of
rules for determining the most important one.  We generate those
ourselves, and if you think that's an easy task, you're not looking hard
enough.  :)  Determining what you *can* do in a situation is very
difficult if you don't have prior instructions.  In chess, for example,
the moves you can make are stated, the legal positions are stated, and
the rules for who moves when are stated.  The game still requires
intelligence, because some moves are definitely better than others and
the rules of the game do NOT specify which, but with enough effort on
the part of people, a computer can be given rules that determine the
value of a move.
Because of that, we have Deep Blue, which, after decades of human
intelligence working on the problem, finally played chess as well as any
human.  That's pretty impressive, and so complex that one is tempted to
call it intelligent, but Deep Blue still accomplishes what it does
through brute force: it's built from hundreds of special-purpose chess
processors working massively in parallel, examining on the order of two
hundred million positions per second.  When you interview chess masters
about how they play, that doesn't seem to be it, and our intuitions
about how the mind works agree: we were never fed thousands of lines of
instructions for choosing good moves, we just learned by playing and by
thinking, and when we play, we don't consider millions of moves - we
look only at the ones that seem promising.  One chess expert, asked how
many moves he considered on average, said "Only one, but it's always the
right one." :)
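
If you're curious what that brute force looks like, the heart of a
chess program is a recursive search like the sketch below.  I've used a
trivial made-up game so it actually runs - one-pile Nim, where players
alternate taking 1-3 stones and whoever takes the last stone wins - but
Deep Blue's skeleton is the same idea, with human-written scoring rules
and astronomically more positions:

    # Bare-bones minimax search on one-pile Nim.  Chess programs use
    # this same skeleton, plus human-supplied rules for scoring
    # positions that are too deep to search to the end.
    def minimax(stones, my_turn):
        if stones == 0:
            # The previous player took the last stone.  If it's now
            # "my" turn, the opponent just won; otherwise I did.
            return -1 if my_turn else +1
        results = [minimax(stones - take, not my_turn)
                   for take in range(1, min(3, stones) + 1)]
        # I pick my best outcome; my opponent picks my worst.
        return max(results) if my_turn else min(results)

    print(minimax(10, True))  # -> 1: take 2, leave 8, and never lose

The machine isn't "seeing" a promising move - it mechanically grinds
through every line of play, which is exactly the part that doesn't match
how the masters describe it.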

So in a nice, formal game like chess, it's (relatively) easy to teach a
computer to play, and maybe that's intelligence, maybe it isn't.  But
compare it to some of the things people do.  What are the rules for
interacting with your friends?  Or for choosing a spouse?  Where do you
get instructions for knowing that a person is trustworthy?  What set of
rules tells you which career path to follow?  The things people think
about are very rarely framed as nice formal problems with nice formal
solutions, and in cases like that, a computer is currently hopeless.
There are very few computers out there that could be dropped into a
situation like that and have any hope whatsoever.  In fact, no computer
has yet been invented that can hold an interesting *conversation* for
more than a few minutes without revealing that it is a computer.  People
figure it out, and we don't - at least, we don't seem to - just choose
the best action from a list of rules; we have other kinds of
intelligence.  AI is working on this, but it's hard to figure out where
we can even start looking.  That's another example right there: what are
the rules for developing an artificial intelligence?  We don't know.  We
have to think of things we were never taught; we have to combine new
ideas that have never occurred in the history of civilization.  We test
our theories, and most of them turn out wrong.  We try redefining the
problem, using neural nets instead of regular computers, or simulating
just one part of intelligence instead of doing everything.  But we just
don't know.  If we want to build a true AI, it has to deal with all that
the same way we do: to be truly intelligent, it will have to attack
these problems - to solve problems where it has NO information to start
from, where there is no list of what's important.  There isn't any
computer today that even approaches that.

Hope it helps,

Johnathan

PS - On the emotions issue: some researchers are working on it (there's
a team at MIT doing some of the big work), but it isn't currently
considered central to AI research.


--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Johnathan Nightingale                U of T - Cognitive Science &
johnath@psych.utoronto.ca                 Artificial Intelligence

  "Display some adaptability."     - Doug Shaftoe, Cryptonomicon
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~