
Sometimes I Wish That It Would Rain Here

Saturday, April 16, 2005

cliquey AI

so I'm working on this paper called "Synthetic Social Construction for Autonomous Characters" to be submitted to the AAAI workshop on Modular Construction of Humanlike Intelligence. the general idea is to take the social construction of self and apply it to autonomous agents.

if you're not familiar with it, social construction of self says that we base our self-concept on the actions of others, both the actions they take toward us and the interactions they have with one another. to quote from the draft:
When one is the recipient of another’s actions, one changes one’s conception of one’s self depending on what actions were taken; if I am repeatedly mocked and insulted, I may begin to hold myself in lower regard. When another is the recipient of similar actions, one sees one’s self as similar to that other; if someone else is also repeatedly mocked and insulted, I could consider myself similar to him or her. When such similarities are in place, one takes one’s cues for social action (or inaction) from those that one considers to be similar to one’s self. If the other person who had been repeatedly mocked and insulted became indignant and demanded that others stop treating him or her in that manner (or if the other person remained tacit and bore the attack in silence), I might be inclined to have a similar reaction (or lack of a reaction).
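to make that passage a bit more concrete, here's a minimal sketch in python. it isn't taken from the paper or from the virtual raft code; the trait vector, the update rate, and the distance-based similarity measure are all illustrative assumptions on my part.

```python
# a hypothetical sketch only -- none of these names, numbers, or update
# rules are taken from the paper draft or the virtual raft code.
import math


class Agent:
    def __init__(self, name, self_concept):
        self.name = name
        # self-concept as a small vector of traits, each clamped to [0, 1]
        self.self_concept = dict(self_concept)
        # the reaction this agent last showed publicly; None means it
        # remained tacit and bore the treatment in silence
        self.last_reaction = None

    def receive_action(self, effects, rate=0.1):
        """being the target of an action nudges the self-concept;
        e.g. being mocked might pass effects={"esteem": -1.0}."""
        for trait, delta in effects.items():
            old = self.self_concept.get(trait, 0.0)
            self.self_concept[trait] = max(0.0, min(1.0, old + rate * delta))

    def similarity_to(self, other):
        """agents whose self-concepts (shaped by similar treatment) sit
        close together score near 1; dissimilar agents score near 0."""
        traits = set(self.self_concept) | set(other.self_concept)
        dist = math.sqrt(sum(
            (self.self_concept.get(t, 0.0) - other.self_concept.get(t, 0.0)) ** 2
            for t in traits))
        return 1.0 / (1.0 + dist)

    def emulate(self, others):
        """take social cues from the most similar other agent by copying
        its last reaction (or its lack of one)."""
        model = max(others, key=self.similarity_to)
        self.last_reaction = model.last_reaction
        return self.last_reaction


# e.g. two agents who are repeatedly "mocked" end up similar, and one
# copies the other's indignant reaction:
a, b = Agent("a", {"esteem": 0.5}), Agent("b", {"esteem": 0.5})
for _ in range(5):
    a.receive_action({"esteem": -1.0})
    b.receive_action({"esteem": -1.0})
b.last_reaction = "indignant"
print(a.similarity_to(b))   # close to 1.0
print(a.emulate([b]))       # "indignant"
```

the point is just that self-concepts shift with the actions an agent receives, similarity falls out of those shifted self-concepts, and emulation follows the most similar other.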
so, agents identify other agents to whom they bear a good deal of similarity and then emulate those other agents' actions. currently, i'm applying the concept to an implementation in the virtual raft project. to allow the user a good visualization of when the characters on the island think that they are similar to one another, I wrote some clustering code, such that the characters will gravitate towards other characters with a high degree of similarity and won't stay in close proximity to other characters with a low degree of similarity.
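roughly, the clustering works like the following (again a hedged sketch, not the actual virtual raft code; the 2-d positions, thresholds, and step size are made-up assumptions):

```python
# again a hedged sketch, not the actual virtual raft clustering code;
# the 2-d positions, thresholds, and step size are made-up assumptions.
def clustering_step(characters, positions, similarity,
                    attract=0.6, repel=0.4, step=0.05):
    """characters: list of agents; positions: dict agent -> (x, y);
    similarity: function(a, b) -> value in [0, 1]."""
    new_positions = {}
    for a in characters:
        ax, ay = positions[a]
        dx = dy = 0.0
        for b in characters:
            if b is a:
                continue
            bx, by = positions[b]
            sim = similarity(a, b)
            if sim >= attract:        # gravitate towards similar characters
                weight = sim
            elif sim <= repel:        # drift away from dissimilar characters
                weight = -(1.0 - sim)
            else:                     # indifferent in between
                weight = 0.0
            dx += weight * (bx - ax)
            dy += weight * (by - ay)
        new_positions[a] = (ax + step * dx, ay + step * dy)
    return new_positions
```

iterated once per frame (e.g. positions = clustering_step(characters, positions, lambda x, y: x.similarity_to(y))), characters with similar self-concepts drift into visible groups while dissimilar ones drift apart.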

while testing, it occurred to me that it might appear to the average observer that the characters on the islands were forming cliques -- by choosing to "hang out" with other, similar characters, they appeared to be excluding dissimilar characters from their social circles. in reality, the dissimilar characters were "choosing" (I use the term loosely) not to hang out with one another. however, it really looks like they are forming cliques.

so this led me to a line of thought on which i'd like some input. we tend to try to build into AI the things we admire or value in humanity: intelligence, emotion, common sense -- generally, those things that we believe set us apart from other complex entities. what about some of the less pleasant aspects of human behavior, such as irrationality, deception, jealousy, or, in this case, a sort of social elitism? i'm not saying we should or shouldn't pursue implementing such aspects; rather, i'm asking whether we should. perhaps we don't want deceptive AI, as that may lead to the host of fates depicted by skynet, the matrix, etc. perhaps we do want deceptive AI, as it might lead to a greater understanding of human deception or, ultimately, help us develop AI that is that much closer to emulating human behavior.

so, I ask, should we recreate the good with the bad and make cliquey AI?

4 Comments:

  • this is a bit tangential, but i have been hearing complaints lately about how utopian ubicomp is. in embedding computers into homes, for example, cs geeks tend to ignore all the nasty power dynamics of the family. the fact that women are most likely to be victims of violent crime in their own homes. etc. so great, now we've got an intercom that can reach me anywhere (so my mom can bug me anywhere)... when are you going to come out with tech to warn me when dad is drunk and looking for his belt?

    anyway... you get the idea.

    whether you're in AI or HCI, it is always easier to design for a utopian world. it's like at the beginning of anna karenina (and i paraphrase here because i'm too lazy to fetch my copy): all good marriages are the same. every bad marriage is fucked up in its own unique way. It's way easier for designers not to bother thinking about all the bazillion possible ways the environment around their system might be fucked up. Or in your case, to think of every little fucked up thing you might model. But I think it's necessary, anyway... if you really want your tech (and this sounds so cheesy) to improve the world.

    -amanda

    By Anonymous Anonymous, at Sunday, April 17, 2005 10:56:00 PM  

  • I want AI that, after I walk by the webcam wearing clothes that went out of style last year, sends pictures with derisive comments to the other AIs on the network. I want AI that hacks into my email account and deletes anything from anyone it considers "out of my league". I want AI that, snickering to itself, automatically opens help every time I start a program, and when I close the help window pops up a message "I just thought you might need it."

    By Anonymous Anonymous, at Tuesday, April 19, 2005 7:23:00 PM  

  • so this is some interesting stuff. amanda's post raises the good point that we can't account for all the weird things that people will do, and likewise we can't account for all the weird (or interesting) behaviors that might emerge in AI. it seems that we shouldn't forgo AI just because it might do bad stuff.

    however, in discussions with others, a few people have said that we should give AI some way to determine that what it's doing is or might be bad. in essence, to give it behavior similar to that of people but also give it a conscience (possibly a better one than those of people). as one person intimated, just as parents teach their children morals, so too should we teach morals to any synthetic beings we might create. (not sure how much I like that analogy, seeing that I don't feel like the "father" of the AI I'm programming)

    anyway, that sort of brings us to Alex's point. I'm sure the good majority of it was in jest, but I can't help but fixate on the line about the AI "snickering to itself." not only is it an incredibly compelling (and rather humorous) image, but it gets at a core issue. what does it actually mean to snicker to oneself? can such a notion be codified? if it can be codified, can it be simulated? furthermore, why would you want such an AI? what purpose, practical or otherwise, would it serve? must AI serve any sort of purpose?

    I realize at this point that I'm degenerating into navel gazing, but I think the bit about the purpose of AI is a very important one for researchers to consider and address. further comments?

    By Blogger Jystar, at Thursday, April 28, 2005 11:09:00 PM  

    Well, in my view everything is perception, so if I perceived the islanders were forming cliques, they would be forming cliques. If we're ever to get perceived humanlike behavior in AI, such as intelligence or hatred or whatever, it'll never happen if it's explicitly programmed. Everything must be emergent in some way. What it actually means to snicker to oneself is to put that perception into the observer. The notion itself cannot be codified (well, it can, but only superficially... which would be fine, but that'd leave an infinite number of other notions to program in). AI to me is an attempt at understanding ourselves well enough that we can re-create it. It's finding the universal machine for humans.

    Navel gazing is fun, even if extremely vague.

    -Greg-

    By Anonymous Anonymous, at Friday, April 29, 2005 2:28:00 PM  
