Strong AI stalls?

by Forrest Sheng Bao http://fsbao.net

(I am very foolish and naive, so there will be tons of errors in this article. You are welcome to point them out to me.)

Last Xmas (2007), my sister Christina Zhang came from Canada to visit me (coz Texas is warmer). She asked me what Artificial Intelligence is. It's fairly hard to explain AI to a high school student. Or rather, I realized I don't understand AI well enough to explain it to a kid on the spot.

Today I got my textbook "Elements of the Theory of Computation", whose cover features a famous guy (or, to use a word I coined, a ZB): Alan Turing. So I went to Wikipedia again and reread his introduction, which I have read many times. I surfed from one link to another and found a paper.

In 1980, a guy at UC Berkeley, John Searle, published a paper titled
"Minds, Brains, and Programs" in the journal "Behavioral and Brain Sciences":
http://www.bbsonline.org/Preprints/OldArchive/bbs.searle2.html

He defined two terms, "strong AI" and "weak/cautious AI".

To "weak AI", he said: "According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion."

To "strong AI", he said: "But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."

In my words, automated reasoning and problem solving belong to "strong AI", like the work John McCarthy did on puzzle solving decades ago. "Weak AI" holds that it is not possible to build a precise human mind, only an optimized/approximated one. It is more popular now and developing rapidly, in fields such as pattern recognition and computer vision.

I saw a review stating that strong AI stalls now (or develops very slowly) while weak AI has many widely credited practical results. Yeah, it's true. Lots of laptops nowadays are equipped with fingerprint recognition.

I was shocked. This matches my feeling when I first joined KRLab (Knowledge Representation Lab). But here is the problem: industry's demand for planning and scheduling keeps growing, at places like airports or NASA. So it is ironic. The world needs faster reasoning agents, yet we can't build them. For instance, it took John McCarthy about 30 years to work out the idea of "default" (according to Dr. Gelfond's slides for "Intelligent Systems"). It also took many years to distinguish the concepts of default negation and true (classical) negation (Gelfond & Lifschitz).
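The distinction between the two negations can be made concrete. Below is a toy Python sketch (an assumed illustration, not real Prolog or answer set programming): default negation "not p" means p cannot be derived, while classical negation "-p" means p is provably false. The predicates and the Tweety-style example are my own choices for illustration.

```python
# Knowledge base: 'bird(sam)' is known; '-flies(sam)' means it is
# provably FALSE that sam flies (classical negation).
facts = {"bird(tweety)", "bird(sam)", "-flies(sam)"}

def holds(atom):
    """An atom holds only if it is explicitly derivable (here: listed)."""
    return atom in facts

def flies_by_default(x):
    """Default rule: a bird flies unless we can derive that it does not.
    'not holds(-flies(x))' is default negation: the mere ABSENCE of a
    proof of -flies(x) lets the default fire."""
    return holds(f"bird({x})") and not holds(f"-flies({x})")

print(flies_by_default("tweety"))  # True: nothing says tweety can't fly
print(flies_by_default("sam"))     # False: -flies(sam) is explicit
```

Note the non-monotonicity: adding the single fact "-flies(tweety)" would retract the earlier conclusion, which is exactly what classical (monotonic) logic cannot do.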

You know LISP? You know Prolog? They were born a long time ago... Of course, 50 years is not long in history, but it is long enough in Computer Science.

Why? Do we need a new theory of computation? Do we need the next Alan Turing?

Now the basis of strong AI is logic, more precisely, non-monotonic logic. The famous language Prolog stands for "Programming in Logic." That's why some consider that the foundation of AI was laid when Aristotle, on the mountains of Greece, worked out how human beings think. What I wonder is whether this basis is the right one.

Besides, strong AI spends much of its effort on knowledge representation. But this is tough work. It's hard to represent even a container, a closed container, a container with a lid.
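To see why, here is a toy sketch (my own assumed example, not from any KR system) of representing container variants as a set of facts. Each seemingly small refinement, closed or lidded, forces new predicates and new rules about how they interact:

```python
# A tiny fact base: tuples of (predicate, object).
kb = {
    ("container", "c1"),                      # a plain container
    ("container", "c2"), ("closed", "c2"),    # a closed container
    ("container", "c3"), ("has_lid", "c3"),   # a lidded but open container
}

def closed(x):
    # Default negation again: x counts as open unless 'closed' is derivable.
    return ("closed", x) in kb

def can_put_into(x):
    # You can put something into a container only if it is not closed.
    # Note the subtlety: merely HAVING a lid does not block access;
    # only a closed container does. Getting such interactions right,
    # for thousands of commonsense concepts, is the hard part of KR.
    return ("container", x) in kb and not closed(x)

print(can_put_into("c1"))  # True
print(can_put_into("c2"))  # False
print(can_put_into("c3"))  # True
```

Even this toy version leaves obvious gaps (can a lid be put on c1? does closing c3 change c2?), which hints at why representing everyday objects took the field so long.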

Can someone give me an answer: why does strong AI stall? Where is the crack that leads to the next revolution?

I need to figure this out before choosing the title of my Ph.D. dissertation.
