Sometimes I Wish That It Would Rain Here

Thursday, August 04, 2005

Grand Vision Blinded

About a month ago, AAAI held its annual national conference in Pittsburgh, PA, which I had the good fortune of attending. While some of the talks were absolutely captivating, others were mind-numbingly boring. It’s entirely possible that I don’t have enough expertise in the field of AI as a whole to have appreciated some of the talks, but I often found myself questioning the material presented. Why is this here? I thought. Is this AI? What is the purpose of AI? What is AI?


This set of questions was a recurrent theme throughout the conference, as indicated in part by several of the posts to the AAAI-05 student blog. Minsky’s keynote, as one would expect, stated that the field of AI as a whole was headed in the wrong direction and, by continuing in that direction, would never achieve AI’s grand vision of an intelligent machine. True to form, he presented his own plan for how to achieve said vision. From what I understood (which wasn’t very much, since I was also working as a student volunteer during the talk), his design involved some sort of multi-layer architecture, with the lowest layer tied closely to the physical world and the highest layer being a total abstraction. I think some of my confusion may also have arisen from the fact that such an expansive plan could never be condensed into a one-hour keynote; indeed, he skipped a good many of his slides, saying that they would be posted on his website.


Minsky, however, was not the only one presenting a new path that would supposedly bring an AI renaissance. Jeff Hawkins presented some work taken largely from his recent book On Intelligence, which focused on using a new theory of the neocortex to end the ‘AI winter.’ The talk also included some details of an implementation based on the theory. Despite there not being time for a demo, the system seemed quite adept at symbol recognition, even with distorted or missing information. Hawkins was quick to acknowledge the system’s limitations, including the fact that it deals only with abstractions from sensory input and thus is not linked to any sort of embodiment. The key idea is that abstractions learned from one type of input can carry over to others, much as the human brain reuses the same architectural patterns in different regions. However, despite the system’s adeptness, it seems to me like just a new approach to neural networks. Not that the concept of neural networks is flawed, but it doesn’t seem that a new approach to them will bring us any closer to an intelligent machine.


While at the conference, I also participated in a workshop on the modular construction of human-like intelligence, organized by Kristinn Thórisson, Hannes Vilhjálmsson, and Stacy Marsella. The point of the workshop, as the name would imply, was to share projects, methods, and ideas about developing intelligent, human-like behavior by combining a variety of independently developed technologies, such as vision, speech recognition, and decision making. The workshop, too, was full of interesting ideas, but most of the work presented seemed less interested in strong AI and more in creating human-like behavior for specific task domains.


By no means is each of these various lines of research unworthy of in-depth study. For example, vision algorithms that recognize hostile activity in a crowd of people could be invaluable to law enforcement officials and public safety in general. The question at hand is whether these pursuits fall under the classification of AI. More to the point, what is AI? Where do we set the demarcations? Is path finding AI? Are vision algorithms AI? Game playing? Knowledge representation? Multiagent systems? Robotics? Machine learning? Natural language understanding? Probabilistic reasoning? What happened to AI’s Grand Vision of creating truly intelligent machines?


I believe that the grand vision has become blinded, in part, to the progress and successes in these various subfields of AI. Just because a group is working on a problem that does not correlate exactly to human-like intelligent behavior does not mean that they are not researching AI. Indeed, the very nature of artificial intelligence is that while intelligent, it is artificial, and thus not an exact facsimile of the human behavior by which we define intelligence. The grand vision of AI is concerned not with artificial intelligence, I think, but with intelligence. Here, too, AI’s vision of intelligent machines has become blinded, not to the progress made toward that goal, but to the purpose of that goal.


Why do we seek to create intelligent machines? Some cite utilitarian purposes, but it’s questionable whether a menial laborer, which is generally the envisioned role for artificial intelligences, needs to be intelligent, or whether it could serve its function perfectly well being merely artificially intelligent. When these machines move into the home, a more sophisticated, delicate, and adaptive intelligence becomes necessary, but I’m still not convinced that this requires anything more than artificial intelligence.


Another possible motivation for creating intelligent machines arises when we take a biological perspective, namely that we have an undeniable urge to create, which gives rise to the behavior of generating offspring to continue the species. That same creative urge may drive us to create intelligent machines as another sort of progeny, but this leads to all sorts of ethical and religious arguments, which, while quite fascinating, are also quite beside the point at hand.


While there may be many further motivations for work in AI, that of using AI as a reflective study of humans is particularly compelling, for a number of reasons. For example, in the field of artificial life, computational systems have been built using some rule set theorized to govern the behavior of a natural system. In some cases, the artificial system has then reproduced the same behavior as the natural system, thus lending support to that particular theory about the rules behind the natural system’s behavior. However, similar reasoning may not scale to AI, partly because these experiments only give an indication of how the natural system might work, not an answer as to how it does work, and partly because they rest on the supposition that the natural system does in fact follow a specific set of codifiable rules. Whether or not human behavior is governed by such a rule set is not just an open question, but a debate in which philosophers, psychologists, cognitive scientists, and many others have engaged for ages.
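
To make that methodology concrete, here is a toy sketch of my own, in Python (not any system mentioned above): Conway’s Game of Life, a classic artificial-life rule set in which a few local rules produce global, lifelike behavior, such as the ‘glider,’ a pattern that crawls across the grid even though nothing in the rules mentions motion.

    # A toy artificial-life rule set: Conway's Game of Life.
    # Illustrative sketch only; the rules are local, the behavior global.
    from collections import Counter

    def step(live_cells):
        """Advance one generation: a live cell survives with 2 or 3 live
        neighbors; a dead cell is born with exactly 3."""
        neighbor_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live_cells
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        return {
            cell
            for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)
        }

    # A "glider": five cells whose shape recurs, shifted diagonally,
    # every four generations. Emergent motion, nowhere stated in the rules.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # the same shape, translated by (1, 1)

Observing a glider supports the claim that these rules can produce motion-like behavior, but, as with the natural systems above, it shows only that such behavior can arise this way, not that it must.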


The realization of AI’s grand vision could serve as an answer to this question. However, two problems complicate this reasoning. First, success in creating an intelligent machine would answer the question, but failure would not, just as with the halting problem, where we cannot tell whether the answer is ‘no’ or merely ‘not yet.’ Second, even granting that definitive success would be an answer, how could we determine that definitive success had been achieved? Searle’s Chinese room argument seems to severely diminish, if not defeat, the validity of the Turing test, so we cannot simply assume that human-like behavior implies human-like intelligence. To my knowledge, there does not exist a test that could demonstrate the vision’s realization. Thus, we can know neither when we have succeeded nor whether we have failed to create an intelligent machine.
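
For the programmers in the audience, that halting-problem asymmetry can be made concrete with a small sketch of my own: a bounded observer can confirm that a program halts, but it can never distinguish ‘never halts’ from ‘has not halted yet.’

    # Run a generator-based 'program' for a bounded number of steps.
    # Returning within the bound proves halting; exhausting the bound
    # proves nothing, since 'never' and 'not yet' look identical.
    def observe(program, max_steps):
        gen = program()
        for _ in range(max_steps):
            try:
                next(gen)
            except StopIteration:
                return "halts"
        return "unknown"  # 'no' or 'not yet'? We cannot say.

    def terminating():
        for i in range(10):
            yield i  # finishes after ten steps

    def looping():
        while True:
            yield  # never finishes

    print(observe(terminating, 1000))  # halts
    print(observe(looping, 1000))      # unknown, no matter the bound

No matter how large max_steps grows, the second verdict never improves; that is exactly the sense in which failure to create an intelligent machine answers nothing.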


I posit that AI’s grand vision has been blinded. In thinking that creating an intelligent machine is the only worthy task for AI, it has been blinded to the myriad innovations and accomplishments that also fall under the field of AI. In pursuit of its goal, it has been blinded to the purpose of achieving that goal. By no means do I think that the grand vision of intelligent machines is an unworthy pursuit. Rather, I seek both a reason why a truly intelligent machine is necessary where an artificially intelligent one would not suffice, and an acceptance within the field of AI of one another’s methodologies in pursuing our respective visions.