
Sometimes I Wish That It Would Rain Here

Monday, February 25, 2008

political affiliation

while adjusting some privacy settings on facebook, I noticed the profile spots for Political Views and Religious Views. political is a drop-down box with pre-specified options, while religious is an open-response text box. this alone is cause for interest. are political views more pigeonhole-able, more easily classifiable? are the facebook creators more worried about offending someone by excluding their religion than their politics? then I saw the specific options listed for Political Views:

Very Liberal
Liberal
Moderate
Conservative
Very Conservative
Apathetic
Libertarian
Other

wow. I really had to read it twice. the only one that's actually a party is Libertarian (I don't think there's an Apathetic party, though I suspect that might be a little oxymoronic).

however, this is troubling at a deeper level. what if I'm morally conservative (e.g., against gay marriage and abortion), but I believe in expansive governmental social programs like welfare and public health care? is that moderate? to me, it sounds both very conservative and very liberal, just in different ways. this sort of polarization is neither helpful nor healthy; I think it's quite damaging. it seems like just the sort of thing against which Jon Stewart railed on Crossfire several years ago. trying to put political views on a single axis like this not only makes it difficult for me to express my identity (something that facebook should help me do), but it also reinforces this obfuscation of the complexities of our current political situation.

...or you could just be libertarian.


Friday, February 22, 2008

evaluation, interpretability, and the utility of dreaming

a few nights ago, my girlfriend and I watched The Science of Sleep. for those not familiar, the basic premise revolves around a main character who has difficulty distinguishing his waking life from his dreams. afterward, we got into (what I felt was) a somewhat confused conversation about whether or not it was a good movie, why it was a good movie, and how you know it's a good movie (and I fear that conversation has led indirectly to what is an almost equally confused blog post). one of the pseudo-conclusions we came to is that it's a good movie because of its interpretability. that is, there are several parts of the movie, foremost the ending but also bits and pieces throughout, that were, we think, left intentionally unclear as to what exactly happened. the point is not to figure out the "true" or "real" story at those points. rather, the genius of the movie seemed to be its ability to engage the audience in interpreting those somewhat ambiguous parts. my mind kept slipping toward questions of evaluation: how do we know it's a good movie? when the key aspect of the movie has nothing to do with the movie objectively and everything to do with the interactions between movie and viewer, how can one really say anything about the movie itself?

evaluation is a huge buzzword in HCI. "ok, cool, you built your system, but does it work? does it achieve the intended goal?" I've heard it said that part of a dissertation is scoping out a problem, picking a portion of that problem, describing the win condition by which you know the problem has been solved, and, crucially, demonstrating that the win condition has been achieved. even when we recognize that evaluation is an interactive process, that it's really about the meeting of system and user, evaluation so often boils down to a question of success: does the system achieve the goals it set out to accomplish? there are certainly lots of conversations going on right now about richer, fuller means of evaluation, focusing less on system evaluation and more on experience evaluation, and emphasizing that evaluation is a process of determining value, which is necessarily contextually (historically, culturally, etc.) contingent. personally, I find a lot of this work both particularly compelling and very liberating, especially with respect to the epistemological questions it raises: what do we as a field consider valid knowledge, and how do we validate methods of knowledge production? on the other hand (maybe it's just that I'm having a hard time shedding my positivist roots), I still have a desire to know: does it work?

this desire becomes inherently problematic when the ostensible goal of a system is to support, facilitate, encourage, and even engender critical thinking and reflection, especially when that reflection hinges on the interpretability of the system. here, I refer to interpretability not as a question of "do participants interpret this system properly?" rather, the question I want to ask is, "to what extent does the system present a resource for interpretation?" it's difficult enough to ascertain whether or not interactors are engaging in these abstract processes (critical thinking, reflection, interpretation) to begin with. now, try to determine to what extent the interactor's thoughts, feelings, and behavior are a result of interacting with the system. the very notion seems misguided; we're not dealing with a simple cause-and-effect relationship here, but rather a whole complex system in which I doubt any single aspect can be causally linked to any other. besides, this isn't about controlling for confounding factors. it's about getting people to think, to critically engage, and to question, reconsider, and possibly even reformulate their conceptual frame.

I think one of the difficulties in my case is that the system I'm developing has a goal that seems objectively evaluable. does it do what I say it does? am I able to automatically identify conceptual metaphors (à la Lakoff and Johnson) in bodies of written text? well, I think the question of whether or not it works, or how well it works, depends largely upon the interactor's (i.e., "user's") interpretation of the system's results. moreover, I think it hinges on the interpretability of those results. the question, I suspect, should not be, "does the system accurately and correctly identify conceptual metaphors?" rather, it should be, "does the system produce results that serve as a resource for the interactor's interpretation, and through that interpretive process does the interactor engage in critical thinking and reflection?" not that this is a particularly easy question to answer, but it seems a somewhat more useful one in terms of evaluating, i.e., determining the value of, the system. it's not about measuring success; it's about understanding the interactors' experience with the system.
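to make the "objectively evaluable" framing concrete, here's a rough sketch in python of what a naive conceptual-metaphor spotter might look like. to be clear, this is emphatically not my actual system; the tiny source/target domain lexicons, the crude stemmer, and the example sentences are all hypothetical stand-ins, just enough to show the shape of the task:

import re

# hypothetical toy lexicons: concrete source-domain verbs and abstract
# target-domain nouns, in the spirit of ARGUMENT IS WAR / IDEAS ARE JOURNEYS
SOURCE_VERBS = {"attack": "WAR", "defend": "WAR", "demolish": "WAR",
                "wander": "JOURNEY", "arrive": "JOURNEY", "drift": "JOURNEY"}
TARGET_NOUNS = {"argument": "ARGUMENT", "claim": "ARGUMENT",
                "idea": "IDEAS", "theory": "IDEAS"}

def stem(word):
    # crude stemmer for the demo: strip common inflectional endings
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def metaphor_candidates(text):
    # flag any sentence that pairs a source-domain verb with a
    # target-domain noun; each hit suggests a candidate cross-domain
    # mapping like ARGUMENT IS WAR
    for sentence in re.split(r"[.!?]+", text.lower()):
        words = [stem(w) for w in re.findall(r"[a-z]+", sentence)]
        verbs = {w for w in words if w in SOURCE_VERBS}
        nouns = {w for w in words if w in TARGET_NOUNS}
        for v in verbs:
            for n in nouns:
                yield sentence.strip(), f"{TARGET_NOUNS[n]} IS {SOURCE_VERBS[v]}"

for sentence, mapping in metaphor_candidates(
        "He attacked every claim I made. We wandered from idea to idea."):
    print(f"{mapping}: {sentence!r}")

and this is precisely the trouble: I could score something like this for precision and recall against a gold-standard corpus, but that number would say nothing about whether the candidates it surfaces (right or wrong) actually serve as a resource for the interactor's interpretation.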

this ended up getting too long for a single post, so I'll end with the above thoughts about evaluating interpretability and reflection. more stuff about dreaming to follow...


Wednesday, February 06, 2008

iSchool dissertations and rigor

I was following this discussion at UW's iSchool, and the previous week's, on what constitutes an information science dissertation, and I found the (posted) results of the panel rather interesting. they list a series of pragmatic suggestions from students, the first and most important being to "satisfy yourself first," followed by a list of expectations from professors. one striking expectation from the professors is that the dissertation be "rigorous." I suspect it is no coincidence that this expectation is followed by being able to justify a qualitative dissertation to a quantitative researcher and vice versa. while we can all agree that research should be rigorous, what actually constitutes rigor seems to vary, at times greatly, and not just along quantitative/qualitative lines. the faculty in my department seem similar to UW's: folks from different disciplines coming together around common interests. this leads to different definitions of what counts as rigorous, a sort of panoply of perspectives from which to choose the best fit for your particular problem.

I'm wondering, though, as iSchools begin to graduate students, what will the field of information science consider rigorous? will it maintain this panoply approach? will certain approaches become canonized and others discredited? will we generate new approaches distinct to iSchool-type problems? what are the potential ramifications if these methods get picked up and transferred to other fields? when and how might such new methods be considered rigorous, both in information science and in other disciplines?
