
Sometimes I Wish That It Would Rain Here

Sunday, May 06, 2007

CHI - experience evaluation SIG

last week was CHI, and there are about a bajillion things that came up that I really want to blog about. the first one that's actually made it out of my head and onto my keyboard is about the SIG I went to on evaluating experience-based HCI, organized by Joseph 'Jofish' Kaye, Kirsten Boehner, Jarmo Laaksolahti, and Anna Ståhl. due to delayed Caltrains, I didn't get there until a half hour into the SIG, so I missed pretty much all of the introductions. however, it was an enjoyable, exciting thing of which to be a part, and I feel like there was some truly useful discussion. the participants at the SIG broke up into 3 small groups, each of which chose to address one particular question. my group (including, among others, Michael Muller, Janet Vertesi, Ryan Aipperspach, and Sara Ljungblad) took on, "What are good criteria for an evaluation of experience-focused HCI?" essentially, this is a question of meta-evaluation, that is, how do we evaluate our methods for evaluating experience-focused HCI? the first bit below is a number of criteria that our group thought would be good for evaluating evaluation methods. below that is just a transcription of my notes from the SIG. most of these are from little comments jotted in the margins of the sheet of paper they gave us, so they come in no particular order. the regular typeface is my actual notes, and the italics are comments I added after the fact.

What are good criteria for an effective evaluation method?

- Does it highlight the role of the researcher? of the observed?

- Does it elicit rich stories? thick descriptions? (I think there are interesting problems with elicitation here that tie back into the first point)

- Does it help to construct a faithful analysis or account or report?

- Does it inspire users/designers/researchers/companies?

- Does it open up interpretations? (closing out isn’t necessarily bad)

- Does it make sure not to generate graphs?

historicizing objectivity – Daston and Galison (Representations, 40, 81-128) describe how objectivity means different things at different times

standpoint epistemology and The Voice from Nowhere

experiences in the moment vs after the moment. how to get people back into the moment after the fact? can use reflective visualizations. it may be better to have people discuss around an artifact

highlight the reflexive nature of technology (is/can technology itself be reflexive? reflective?)

challenges and opportunities for subjectivity

expose subjectivity

the process of an experience vs the product of an experience (I suspect this might be a distinction between having an experience and the memory of the experience. hm, is the act of remembering an experience itself, possibly quite different and distinct from the original experience being remembered? in experience-focused HCI/design, perhaps we could/should support not only having experiences but also the experience of remembering those experiences)

evaluating experiences as storytelling, plumbing a collection of episodic memories (the notion of stories and narratives came up a lot in our small group)

elicit multiple, different stories. different experiences for different users. retain multiple perspectives. (emic perspective, multiple realities. there’s almost certainly a connection here to something a whole lot bigger about multiple thought styles, epistemological pluralism, different cognitive framings, etc.)

literary theory often evaluates texts over and over. why do we not return repeatedly to the same UX to understand it more fully or in different ways? (perhaps this connects to the above point, in that HCI doesn’t particularly value having lots of different perspectives, so studying the same thing again is seen as a waste of time)

often talk about the “representative” user. how is a particular user representative? statistically? politically (e.g., elected union representative)? who chooses the representative and how?

different criteria to evaluate different evaluation methods for different experiences

evaluation as developing a sensibility rather than determining progress – “the tyranny of progress” (a distinction that came up in the HCI and New Media workshop in which I participated was that of evaluation vs analysis, that evaluation might be more along the lines of judging something as good or bad, while analysis is more about understanding something. those perhaps might not be the right terms, but I suspect it might be an important distinction to make)

process of presenting results of experience evaluation: exposure -> awareness -> empathy -> advocacy -> change

a member of another group said “when we’re in flow, we’re having an experience,” that flow can be one indicator of an experience (this raises the interesting notion that we might sometimes be having an experience and might at other times not. I’m not sure how much I’d agree with this notion that we might at some times not be having an experience, but I’d certainly agree that we have different types of experiences at different times, and that when trying to evaluate experience you might only be interested in certain types of experiences.)



I'd love to hear others' thoughts/comments/questions about this stuff. I feel like it's an important direction for HCI to pursue, and I think that having discussions about how to pursue it most effectively is an important aspect of making experience-based evaluation more accepted and central to the HCI community.

