
Wired Article

Why AI Failed -- The Past 10 Years

Push Singh, 9 June 1996

(and a comment by Bill Gates)

There has been some progress in AI in the past decade, but not much. I see five reasons for this:

  1. The field has shattered into subfields populated by researchers who have different goals and speak very different technical languages. Much has been learned, and it is time to start integrating what we've learned, but few researchers are versed widely enough to do so. Marvin Minsky proposes a way to do so in his seminal work, The Society of Mind, but his framework covers so much territory that few have managed to understand it well enough to implement it.
  2. The field suffers from physics envy. Most AI researchers are looking for simple explanations to inherently complex phenomena. Perhaps the universe can be boiled down to a few simple rules, but it is unlikely that brains can. Consider that roughly half of our DNA is dedicated to directing the growth of the brain and the nervous system. To the extent that we ought to model ourselves after another science, we should copy biology - there are doubtless hundreds of different kinds of mechanisms in the brain, specialized to different sorts of tasks, integrated in an equally complex management structure. The current excitement over neural networks, logical theories, statistical approaches, and genetic algorithms is symptomatic of this puzzling, widespread belief that intelligence can be captured by a simple mechanism.
  3. Many AI researchers have lost touch with the original goal of building a flexible, human-level intelligence. This is partly a consequence of the need to specialize in order to make a name for oneself in the field, and partly a consequence of the funding structure at most institutions that pursue AI research. It is difficult to get funding for risky, large-scale integration projects.
  4. AI is the ultimate software problem, yet many AI researchers spend inordinate amounts of time building and maintaining robots. We should be experimenting with new algorithms, not soldering! When robots are needed, we ought to work in simulated worlds.
  5. AI researchers have been trying unsuccessfully to get around the need for common sense knowledge. To solve the hard problems in AI - natural language understanding, general vision, completely trustworthy speech and handwriting recognition - we need systems with common sense knowledge and flexible ways to use it. The trouble is that building such systems amounts to "solving AI". This notion is difficult to accept, but it seems that we have no choice but to face it head on.

This prompted a brief note from Bill Gates:

   "I think your observations about the AI field are correct. As you are writing papers about your progress I would appreciate being sent copies. I am still extremely interested in AI."

I'm glad someone is!
