
Sometimes I Wish That It Would Rain Here

Sunday, March 20, 2005

jazzbots and more

the crux of this post is a question; most of what follows is an introduction to and initial discussion of that question. I'm looking for feedback, so if you just want the question itself, scroll to the bottom, though doing so would, at least to some extent, defeat the purpose of the rest of the post.

on friday night, I saw Kei Akagi, a piano professor at UCI, perform with LEMUR, the League of Electronic Musical Urban Robots. basically, Kei improvised on the piano, quite well I might add, and the robots improvised along with him. the actual performance was rather interesting; at specific moments one would hear Kei introduce a theme and then hear one of the robots pick it up, and they would go back and forth, developing the theme. there were even times when the robots would introduce a theme and Kei would pick it up and run with it. these moments were a minority, however. most of the time, since the robots were all percussion instruments, one could hear a rather loose connection between Kei's piano and the robots' percussion if one listened closely, but it often came across as more or less just noise. this may be because I am not well-versed in the 20th-century style used in the performance, but after talking to some others, I think many had reactions similar to mine.

however, a very interesting comment was made during the introduction to the performance by Chris Dobrian, a professor of music composition, theory, and technology at UCI. describing how the installation worked, he spoke about the way a human musician improvises: he or she picks up themes, modifies them, pulls from a large pre-built repertoire, and so on. he then said that the goal was for the robots to do the same things that human improvising musicians do, but not necessarily in the same way that humans do them.
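
a rough sketch of what that might mean, in Python (all names here are hypothetical and have nothing to do with the actual LEMUR software): a toy improviser that does the things a human improviser does, picking up a theme, varying it, falling back on a repertoire, without any pretense of doing them the way a human does.

    import random

    class Improviser:
        def __init__(self, repertoire):
            self.repertoire = list(repertoire)  # pre-built phrases (lists of MIDI pitches)
            self.current_theme = None

        def listen(self, phrase):
            # "pick up" whatever the other player just did
            self.current_theme = list(phrase)

        def vary(self, phrase):
            # modify the theme: transpose it by a small random interval
            shift = random.choice([-2, -1, 1, 2])
            return [note + shift for note in phrase]

        def play(self):
            # answer with a variation of the heard theme, or pull from
            # the repertoire if nothing has been heard yet
            if self.current_theme:
                return self.vary(self.current_theme)
            return random.choice(self.repertoire)

    bot = Improviser(repertoire=[[60, 62, 64], [67, 65, 64, 62]])
    bot.listen([60, 64, 67])   # the pianist introduces a theme
    print(bot.play())          # the robot answers with a transposed variation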

this got me thinking about different philosophies and approaches to AI. some classical AI researchers, like McCarthy, take the approach of trying to give a machine formal, abstract, general higher-reasoning abilities through methods such as first-order logic, situation calculus, and the like. others, like the cognitive computational neuroscientist Richard Granger, take the approach of building machines that resemble the circuitry of the brain at a very low level (that of neurons) in order to simulate its basic functions. both of these approaches, and others, have had varying degrees of success in producing human-like, intelligent behavior (an interesting side conversation I might like to pursue is the difference between the two). both, however, attempt to simulate humans by emulating them; that is, they look at how humans do what we call being intelligent and try to get machines to be intelligent by the same means.
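
to caricature that contrast, here are two toy fragments in Python; neither resembles anyone's actual research system. the first derives a new fact by an explicit first-order-logic-style rule; the second is a leaky integrate-and-fire neuron whose spikes simply fall out of the arithmetic.

    # symbolic/classical caricature: intelligence as inference over explicit facts
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

    def grandparent(x, z):
        # rule: parent(x, y) and parent(y, z) imply grandparent(x, z)
        return any(("parent", y, z) in facts
                   for (rel, a, y) in facts if rel == "parent" and a == x)

    # low-level/neural caricature: behavior emerging from simple circuitry
    def lif_neuron(inputs, threshold=1.0, leak=0.9):
        # leaky integrate-and-fire: accumulate input, leak over time, spike at threshold
        v, spikes = 0.0, []
        for current in inputs:
            v = v * leak + current
            if v >= threshold:
                spikes.append(1)
                v = 0.0  # reset after a spike
            else:
                spikes.append(0)
        return spikes

    print(grandparent("alice", "carol"))           # True, reached by explicit inference
    print(lif_neuron([0.4, 0.4, 0.4, 0.1, 0.9]))   # [0, 0, 1, 0, 0]; the spike just happens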

Chris Dobrian's comment seems to suggest another approach: rather than trying to emulate the process that gives rise to intelligence in humans, we might try to produce the same effects via a different process. much of the research in artificial life suggests this might be possible (I can't think of a good reference at the moment; any suggestions would be appreciated). a-life simulations can often produce the same effects as their wet-life counterparts, but it is not necessarily true that the two systems produce those effects in similar ways. might we be able to produce intelligence by some means unrelated to the one humans use? I will admit that, so far, such an approach does not appear promising, but that does not necessarily mean it is entirely the wrong approach.
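
the stock illustration of this point is Conway's Game of Life: three dead-simple local rules, borrowed from nothing in biology's actual machinery, yet the global behavior can look strikingly alive. a minimal version, as a sketch:

    from collections import Counter

    def step(live):
        # live is a set of (x, y) cells; apply Conway's birth/survival rules
        neighbor_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        return {cell for cell, n in neighbor_counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))   # the same shape, shifted down and right by one cell

no rule mentions movement, yet the glider walks across the grid; the "lifelike" effect is produced by a process that has nothing to do with how anything wet does it.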

ultimately, I suppose it doesn't matter how one produces intelligent behavior. if we use something like the Turing test, the machine is treated essentially as a black box into which questions are fed and from which answers come; the method by which those answers are reached is not apparent, just as we cannot peer into one another's heads to see exactly how the other is thinking. there have been serious challenges to the Turing test, however, including John Searle's Chinese room thought experiment (with which I am admittedly not intimately familiar). that said, before one even considers the Turing test, one must construct a machine that attempts to pass it, and before doing that one must decide how to go about constructing such a machine, which brings us back to the present conversation.
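
in code terms, the Turing test's framing just says the judge sees a question-in, answer-out interface and nothing else; both responders below are hypothetical stand-ins, not any real system.

    from typing import Callable

    Responder = Callable[[str], str]   # question in, answer out: a black box

    def human(question: str) -> str:
        return "well, it depends on what you mean by that..."

    def machine(question: str) -> str:
        # behind this interface could be logic, neural circuitry, or
        # something else entirely; the judge never gets to see which
        return "well, it depends on what you mean by that..."

    def judge(responder: Responder, questions):
        # the judge can only probe the box from the outside
        return [responder(q) for q in questions]

    print(judge(human, ["do you dream?"]) == judge(machine, ["do you dream?"]))   # True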

so, I submit the question: what is the best approach to constructing artificial intelligence, emulating the processes we think give rise to intelligence in humans, or creating a system that produces behavior we would call intelligent, regardless of how it produces that behavior?

1 Comment:

  • Okay... so intelligence versus something that appears intelligent? Wait... I can quote...
    "what is the best approach to constructing artificial intelligence, by emulating the processes we think give rise to intelligence in humans or by creating a system that produces behavior we would call intelligent regardless of how it produces that behavior?"
    I would say a mix. ...I would think that intelligent behavior should not be limited to the ...created by a system similar to organic-beings aspect. However, the illusion of intelligence is still but an illusion. Yet, I think it would... well, what would artificial intelligence be? A non-living being knowing inputted facts? No. I would say that it would have to be able to adapt situationally without being told, 'this is the situation. What do you do?' ...but that this robot or whatever could engineer something that is apparently altogether new. However, the system of creating this characteristic does not need to be limited by resembling human-like stuff. Hehe... reducing to Allan-speak, ka?
    Hope that seemed relevant and thought-provoking.
    -SZ

    By Anonymous, at Wednesday, April 13, 2005 7:10:00 PM
