
Sometimes I Wish That It Would Rain Here

Tuesday, April 26, 2005

(ultimate) frisbee - why so ultimate?

so I came across this article that apparently ran in the Tufts University newspaper trying to denigrate ultimate as not being a real sport. most of the arguments are pretty much crap (ad hominem and the like), but it worries me that this might not be a lone voice in the wilderness. perhaps I'm currently over-sensitive to anti-frisbee sentiments since UCI's club recently got suspended (the details of which I won't go into here). however, while we were still practicing on the school's official recreational fields we got a lot of flak, and I'm not sure why. it's entirely possible that this negative attitude towards frisbee is quite pervasive, although to my way of thinking it's entirely unfounded. of the people I've met personally, none have had this kind of disgust for ultimate, and most are usually interested in trying it. anyone else have similar experiences?

although the article's scathing rebuke is rather unwarranted, it does bring up one interesting question: why is it called "ultimate" frisbee? I mean, really, it's a ton of fun, but I don't think it's any more "ultimate" than any other sport out there. it's entirely possible that the moniker was designed to differentiate ultimate from other flying disc games, such as frisbee golf (also called "frolf"), the discus event in track, or freestyle frisbee. but why ultimate? especially when the game has, as noted in the article, a reputation based somewhat on a history of being played by hippies?

if anyone knows why it's "ultimate," I'd love to find out.

Saturday, April 16, 2005

cliquey AI

so I'm working on this paper called "Synthetic Social Construction for Autonomous Characters" to be submitted to the AAAI workshop on Modular Construction of Humanlike Intelligence. the general idea is to take the social construction of self and apply it to autonomous agents.

if you're not too familiar with it, the social construction of self says that we base our self-concept on the actions of others, both the actions they take toward us and the interactions they have with one another. to quote from the draft:
When one is the recipient of another’s actions, one changes one’s conception of one’s self depending on what actions were taken; if I am repeatedly mocked and insulted, I may begin to hold myself in lower regard. When another is the recipient of similar actions, one sees one’s self as similar to that other; if someone else is also repeatedly mocked and insulted, I could consider myself similar to him or her. When such similarities are in place, one takes one’s cues for social action (or inaction) from those that one considers to be similar to one’s self. If the other person who had been repeatedly mocked and insulted became indignant and demanded that others stop treating him or her in that manner (or if the other person remained tacit and bore the attack in silence), I might be inclined to have a similar reaction (or lack of a reaction).
so, agents identify other agents to whom they bear a good deal of similarity and then emulate those other agents' actions. currently, I'm applying the concept to an implementation in the Virtual Raft Project. to give the user a good visualization of when the characters on the island think that they are similar to one another, I wrote some clustering code, such that the characters will gravitate towards other characters with a high degree of similarity and won't stay in close proximity to characters with a low degree of similarity.
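a minimal sketch of what that kind of clustering behavior could look like -- the trait vectors, the similarity measure, the threshold, and the step rate here are all invented for illustration, not taken from the actual Virtual Raft implementation:

```python
import math

def similarity(a, b):
    # inverse of euclidean distance between trait vectors, in (0, 1]
    d = math.dist(a["traits"], b["traits"])
    return 1.0 / (1.0 + d)

def step(agents, threshold=0.5, rate=0.05):
    # compute all moves first, then apply, so every agent reacts to the
    # same snapshot of positions
    moves = []
    for a in agents:
        dx = dy = 0.0
        for b in agents:
            if b is a:
                continue
            s = similarity(a, b)
            # gravitate toward similar characters, drift away from
            # dissimilar ones
            sign = 1.0 if s >= threshold else -1.0
            dx += sign * (b["x"] - a["x"]) * rate
            dy += sign * (b["y"] - a["y"]) * rate
        moves.append((dx, dy))
    for a, (dx, dy) in zip(agents, moves):
        a["x"] += dx
        a["y"] += dy

agents = [
    {"x": 0.0, "y": 0.0, "traits": [1.0, 0.0]},
    {"x": 1.0, "y": 0.0, "traits": [1.0, 0.1]},  # similar to the first
    {"x": 2.0, "y": 0.0, "traits": [0.0, 5.0]},  # dissimilar to both
]
for _ in range(10):
    step(agents)
```

after a few steps the two similar characters end up closer together while the dissimilar one drifts away from both -- which, to an observer, looks exactly like a clique forming.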

while testing, it occurred to me that it might appear to the average observer that the characters on the islands were forming cliques -- by choosing to "hang out" with other, similar characters, they appeared to be excluding dissimilar characters from their social circles. in reality, the dissimilar characters were "choosing" (I use the term loosely) not to hang out with one another. however, it really looks like they are forming cliques.

so this led me to a line of thought on which I'd like some input. we tend to try and build into AI the things we admire or value in humanity: intelligence, emotion, common sense -- generally, those things that we believe set us apart from other complex entities. but what about some of the less pleasant aspects of human behavior, such as irrationality, deception, jealousy, or, in this case, a sort of social elitism? I'm not saying we should or shouldn't pursue implementing such aspects; rather, I'm asking the question. perhaps we don't want deceptive AI, as that may lead to the host of fates depicted by Skynet, the Matrix, etc. perhaps we do want deceptive AI, as it might lead to a greater understanding of human deception or, ultimately, help us develop AI that is that much closer to emulating human behavior.

so, I ask, should we recreate the good with the bad and make cliquey AI?

Friday, April 08, 2005


ok, this is pretty sweet.


especially after the Engaging the City workshop at CHI, which, btw, rocked, as did all of CHI. presenting, running demos, and attending talks all day; eating, drinking, and partying all night. not a bad way to spend the first week of classes.