
Sometimes I Wish That It Would Rain Here

Thursday, August 04, 2005


Grand Vision Blinded

About a month ago, AAAI held its annual national conference in Pittsburgh, PA, which I had the good fortune of attending. While some of the talks were absolutely captivating, others were mind-numbingly boring. It’s entirely possible that I don’t have enough expertise in the field of AI as a whole to have appreciated some of the talks, but I often found myself questioning the material presented. Why is this here? I thought. Is this AI? What is the purpose of AI? What is AI?


This set of questions was a recurrent theme throughout the conference, as indicated in part by several of the posts to the AAAI-05 student blog. Minsky’s keynote, as one would expect, stated that the field of AI as a whole was headed in the wrong direction and, by continuing in that direction, would never achieve AI’s grand vision of an intelligent machine. True to form, he presented his own plan for how to achieve said vision. From what I understood (which wasn’t very much, since I was also working as an SV during the talk), his design involved some sort of multi-layer architecture, with the lowest layer tied closely to the physical world and the highest layer being a total abstraction. I think some of my confusion may also have arisen from the fact that such an expansive plan could never be condensed into a one-hour keynote; indeed, he skipped a good many of his slides, saying that they would be posted on his website.


Minsky, however, was not the only one presenting a new path that would supposedly bring an AI renaissance. Jeff Hawkins presented some work taken largely from his recent book On Intelligence, which focuses on using a new theory of the neocortex to end the ‘AI winter.’ The talk also included some details of an implementation based on the theory. Despite not having time for a demo, the system seemed quite adept at symbol recognition, even with distorted or missing information. Hawkins was quick to acknowledge the system’s limitations, including the fact that it deals only with abstractions from sensory input and thus is not linked to any sort of embodiment. The key is that abstractions learned from one type of input can carry over to others, much as the human brain reuses the same architectural patterns in different regions. However, despite the system’s adeptness, it seems to me just like a new approach to neural networks. Not that the concept of neural networks is flawed, but it doesn’t seem that a new approach to them will bring us any closer to an intelligent machine.


While at the conference, I also participated in a workshop on the modular construction of human-like intelligence, organized by Kristinn Thórisson, Hannes Vilhjálmsson, and Stacy Marsella. The point of the workshop, as the name would imply, was to share projects, methods, and ideas about developing intelligent, human-like behavior by combining a variety of independently developed technologies, such as vision, speech recognition, and decision making. It, too, was full of interesting ideas, but most of the work presented seemed less interested in strong AI than in creating human-like behavior for specific task domains.


By no means is any of these various lines of research unworthy of in-depth study. For example, vision algorithms that recognize hostile activity in a crowd of people could be invaluable to law enforcement officials and to public safety in general. The question at hand is whether these pursuits fall under the classification of AI. More to the point, what is AI? Where do we set the demarcations? Is path finding AI? Are vision algorithms AI? Game playing? Knowledge representation? Multiagent systems? Robotics? Machine learning? Natural language understanding? Probabilistic reasoning? What happened to AI’s Grand Vision of creating truly intelligent machines?


I believe that the grand vision has become blinded, in part, to the progress and successes in these various subfields of AI. Just because some group is working on a problem that does not correlate exactly to human-like intelligent behavior does not mean that they are not researching AI. Indeed, the very nature of artificial intelligence is that while it is intelligent, it is artificial and thus not an exact facsimile of the human behavior by which we define intelligence. The grand vision of AI is concerned not with artificial intelligence, I think, but with intelligence. Here, too, AI’s vision of intelligent machines has become blinded, not to the progress made toward that goal, but to the purpose of that goal.


Why do we seek to create intelligent machines? Some cite utilitarian purposes, but it’s questionable whether a menial laborer, which is generally the envisioned role of artificial intelligences, needs to be truly intelligent, or whether it could serve its function perfectly well being merely artificially intelligent. When these intelligent machines move into the home, a more sophisticated, delicate, and adaptive intelligence is necessary, but I’m still not convinced that this requires anything more than artificial intelligence.


Another possible motivation for creating intelligent machines arises when we take a biological perspective, namely that we have an undeniable urge to create, which gives rise to the behavior of generating offspring to continue the species. That same creative urge may drive us to create intelligent machines as another sort of progeny, but this leads to all sorts of ethical and religious arguments, which, while quite fascinating, are also quite beside the point at hand.


While there may be many further motivations for work in AI, that of using AI as a reflective study of humans is particularly compelling, for a number of reasons. For example, in the field of artificial life, computational systems have been built using some rule set theorized to govern the behavior of a natural system. In some cases, the artificial system has then reproduced the same behavior as the natural system, thus acting to support that particular theory about the rules behind the natural system’s behavior. However, similar reasoning may not scale to AI, partly because these experiments only give an indication of how the natural system might work, not an answer as to how it does work, and partly because they rest on the supposition that the natural system does in fact follow a specific set of codifiable rules. Whether or not human behavior is governed by such a rule set is not just an open question but a debate in which philosophers, psychologists, cognitive scientists, and many others have engaged for ages.
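To make the artificial-life point concrete, here is a minimal sketch of my own (not any particular system mentioned above): Conway’s Game of Life, in which a tiny, fully codified rule set produces emergent, lifelike behavior, such as “gliders” that travel across the grid even though nothing in the rules mentions movement.

```python
from collections import Counter

def step(live_cells):
    """Advance one generation; live_cells is a set of (x, y) coordinates."""
    # Count live neighbors for every cell adjacent to a live cell.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth with exactly 3 live neighbors; survival with 2 or 3.
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A "glider": five cells whose pattern propagates diagonally across the
# grid -- behavior nowhere written into the rules themselves.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    print(f"gen {generation}: {sorted(cells)}")
    cells = step(cells)
```

The whole “theory” fits in a dozen lines, yet observing the system tells us only what such rules might produce, not whether any natural system actually follows them, which is exactly the limitation noted above.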


The realization of AI’s grand vision could serve as an answer to this question. However, two problems complicate this reasoning. For one, success in creating an intelligent machine would answer the question, but failure would not, much as with the halting problem, where we cannot tell whether the answer is ‘no’ or merely ‘not yet.’ Second, while definitive success would be an answer, how could we determine that definitive success had been achieved? Searle’s Chinese room argument seems to severely diminish, if not defeat, the validity of the Turing test, so we cannot simply assume that human-like behavior indicates human-like intelligence. To my knowledge, no test exists that could demonstrate the vision’s realization. Thus, we can know neither when we have succeeded nor whether we have failed to create an intelligent machine.


I posit that AI’s grand vision has been blinded. In thinking that creating an intelligent machine is the only worthy task for AI, it has been blinded to the myriad innovations and accomplishments that also fall under the field of AI. In pursuit of its goal, it has been blinded to the purpose of achieving that goal. By no means do I think that achieving the grand vision of intelligent machines is an unworthy pursuit. Rather, I seek both to find a reason why a truly intelligent machine is necessary over a merely artificially intelligent one, and to encourage within the field of AI an acceptance of each other’s methodologies in achieving our own visions.

3 Comments:

  • A few random comments ...

    Two competing views of the purpose of AI--producing intelligent (or intelligent-appearing) systems and understanding human intelligence--are, I think, almost as old as the field.

    I wonder, did you hear Ron Brachman's presidential address? A key point was the necessity of combining the results of AI subfields to create true/general AI systems. Specifically, he called for more research on "integrated intelligent systems," which do just that. He offered two examples of such systems: ASE/EO-1 and CALO/RADAR. (If you look at the CALO website you'll see that scads of groups are collaborating.) He also suggested the AAAI national conferences as an ideal venue for practitioners in the subfields to communicate. Thoughts?

    Finally, I THINK one big difference between Hawkins's approach and more conventional neural nets is the much greater role feedback plays in his model. I still have to read his book/paper though.

    By Anonymous Anonymous, at Wednesday, August 10, 2005 8:52:00 AM  

  • First off, thanks for the well-thought-out comments.

    Yes, I saw the presidential address and quite liked it. I thought it was almost a sort of antithesis to Minsky's keynote. Whereas Minsky said AI is going in the wrong direction and should be going this way, Brachman said AI is going in the right direction, and here's functioning proof: an "integrated intelligent system."

    The comment on Hawkins is well taken. The impression I got from his talk is that not only feedback but also abstraction plays a large role in his architecture in a way that it doesn't in others, particularly in the way that patterns learned in one part of the net can be applied in another. But I have not read his book either, so I cannot say for certain.

    My main point with this, and it may be more of a philosophical question, is: should we be concerned with reproducing human-like intelligence? Obviously, CALO/RADAR is a wonderful example of techniques from throughout the field of AI integrated into a functioning and useful whole. What I'm talking about, though, is more like reproducing humans in machines. Emotions, ambitions, etc. Don't even bother with the question of whether or not it's feasible. Is it necessary? Is it useful? What would be the purpose of machines that were intelligent in the same way that humans are? I feel like this human-like intelligence for machines is a main goal of researchers along the lines of Minsky, but I'm just trying to understand why recreating what humans do is so necessary. Why can't systems like those mentioned above be deemed artificially intelligent, and let that suffice? Obviously, on the technical side, there are plenty of improvements to be made to voice recognition, vision, etc., but do we really need a revolution?

    By Blogger Jystar, at Friday, August 26, 2005 8:50:00 PM  

  • I'm the original commenter--I've been occupied with other things for a while.

    I personally would like to see both approaches to AI--producing intelligent-seeming systems and understanding human intelligence--thrive. The first might be characterized as an engineering discipline (building systems), the second as a scientific discipline (understanding systems). Advances in the second can contribute to advances in the first, and vice versa.

    I can conceive of a competent artificial intelligence built on principles differing from those of natural intelligence. Even if this were accomplished, I'd still like to understand how the natural version works, simply because of the potential contribution to science.

    By Anonymous Anonymous, at Wednesday, September 14, 2005 10:27:00 AM  

