
Sometimes I Wish That It Would Rain Here

Sunday, December 20, 2009

technological extravention

technology can solve any problem. or at least, that’s what plenty of folks seem to believe. lots of work (I’m thinking here largely about research on computational technology, but this line of thinking can also be applied more broadly) describes a problem and then presents some sort of technological intervention intended to solve it. every so often, I’ll see a paper describing how, in the process of attempting to solve one problem, the technological intervention actually led to more problems than it solved, and once in a rare while someone will argue that perhaps the best approach would be not to introduce the technology in the first place. but such critical reflection is, in my experience, unfortunately rare. I’ve ranted about related notions in the past, particularly with respect to the ways in which companies and NPOs foist technology on places where it might not be wanted or needed, but never with a specific alternative course of action.

I want to suggest such an alternative approach here. rather than studying the impacts of a technological intervention, what if we were to conduct a technological extravention? that is, how might we better understand technology use, particularly the ways in which that technology is interwoven among larger social and cultural constructs, by removing the technology? being a bit of an etymological enthusiast, I’ll admit that the etymology here is a bit off; something like “technological extraction” or perhaps even “technological extradition” might be more accurate, but I think the neologism I’ve used helps emphasize the nature of the critique.

envision conducting a study that prevents a group of people from, for example, using text messaging, or sending email, or reading blogs, or tweeting (I suspect there might be a difference between forcibly preventing people from using some technology and people willfully avoiding its use, but I’d hesitate to speculate about the exact differences without further consideration of the specific technology and specific individuals involved). how would people adapt to such situations? how would they renegotiate the various social interactions that are currently mediated via these technologies? at the conclusion of the study, how might people’s long-term patterns of use change? obviously, there would be plenty of logistical challenges to overcome (how would you find people willing to participate in this sort of a study? how would you ensure that participants were complying with your requested non-use? what if the study disrupted the conduct of their work, connection with their families, or some other basic aspect of their lives?), but I suspect the results could be highly informative and worth the difficulties. while an admittedly small step in the direction I’ve suggested, such a study might be a concrete way of suggesting that, in some cases concerning technology, perhaps less is better.

(thanks to various members of the Social Code Group, conversations with whom started my thinking along these lines)


Tuesday, February 24, 2009

unintended consequences

my apartment complex recently implemented a new parking permit system. until last fall, residents received plastic hang tags with a pretty distinctive iridescent sticker (ostensibly making them difficult to duplicate). as of September 2008, they switched over to an electronic system. residents enter their car's license plate number (or those of their visitors, with a limited number of visitors per quarter) online. cars from parking and transportation, equipped with cameras and a specially designed computer vision system, then drive around the parking lots, automatically issuing tickets to those parked illegally.

never mind for the moment the surveillance and privacy issues. those are certainly pretty complex, but I feel like they're also some of the more obviously problematic aspects of this technology. what I want to comment on here is a somewhat subtler impact I noticed a week or two ago. it used to be that going to the grocery store, the movies, the dentist, or wherever, one would quite often see hang tags for the graduate student housing complex in which I live. it's not as if I know or am friends with a very large fraction of the hundreds of grad students that live there, but seeing those hang tags created something of a sense of solidarity, of community; it made me feel like I was not alone as a grad student and that, even in this hyper-planned suburban area in which I live, there was a group of people with whom I could identify.

however, since the deployment of this electronic system, no one needs to display hang tags anymore. I didn't even realize that something had been lost until recently, when I saw someone who still had an old tag up that she had not taken down, and realized that I missed the tags. it was interesting, because I'd heard lots of discussion among students and professors about the implications of the new system for privacy and surveillance, but I'd not heard anyone else mention the socio-emotional impact of no longer seeing grad student parking hang tags. I wonder if anyone else has had similar experiences. I find this a particularly provocative example of the development of sociotechnical systems. oftentimes, designers are encouraged to consider the impact their designs may have, beyond just the technical, before deploying them. certainly, one could have hypothesized about or considered surveillance-oriented impacts, but the impact of the absence of visible hang tags would have been, I suspect, harder to anticipate and even harder to address. I wonder if there are better ways of predicting, and accounting for, such effects, short of actually deploying the system.


Wednesday, May 09, 2007

what, panopticon? you don't say!

(via Ars) Viktor Mayer-Schönberger, of Harvard's Kennedy School of Government, argues that, given the ways digital technologies are used to record tons of minutiae about people's daily lives, society is headed towards a Benthamite panopticon. this is not a particularly surprising argument, and others have touched on this theme quite a bit. the suggestion in this "modest proposal" is that technologies be built, by default, to forget. that is, logging technologies, e.g. those on the iTunes music store website that collect customer info, would automatically wipe that info after a legally prescribed period of time, say, a couple of years. files created by digital cameras would self-delete after a set time period, where the user of the camera gets to determine the length of that period. it's true, forgetting is a very important part of the way our society functions, and it plays a role in smoothing over lots of potentially awkward social situations. when you stop forgetting, you stop being able to levy plausible deniability arguments about, say, what you were or weren't doing when your significant other claims to have caught you emailing another lover. furthermore, if everything is remembered, then memories lose their preciousness. it doesn't matter that I remembered your birthday, because I didn't actually remember; my electronic daily planner emailed me to remind me that it was your birthday. I think there are all sorts of aspects of memory that cannot be emulated by digital technologies, especially those parts of memory associated with subjective experience, so I doubt that the preciousness of human memory will ever be totally eroded.
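the forget-by-default idea is, at bottom, just a time-to-live on every record. here's a minimal sketch of what such a store might look like; the class and method names are my own invention, not anything from the actual proposal, and a real system would obviously need durable storage and legally auditable deletion rather than an in-memory list:

```python
import time

class ForgettingStore:
    """a toy log store that forgets by default: every record carries an
    expiry time, and expired records are purged whenever the store is read."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._records = []  # list of (expires_at, payload) pairs

    def add(self, payload, now=None):
        now = time.time() if now is None else now
        self._records.append((now + self.ttl, payload))

    def records(self, now=None):
        """return only unexpired records, silently dropping the rest for good."""
        now = time.time() if now is None else now
        self._records = [(t, p) for (t, p) in self._records if t > now]
        return [p for (_, p) in self._records]

# a store that remembers purchases for 60 seconds
store = ForgettingStore(ttl_seconds=60)
store.add("purchase: album X", now=0)
store.add("purchase: album Y", now=30)
print(store.records(now=45))  # both still remembered
print(store.records(now=70))  # the first purchase has been forgotten
```

the key design point is that deletion is the default path, requiring no action from anyone; remembering longer is what would take deliberate effort.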

the problem is, I'm just not convinced that what we need is a technological solution to this technology-induced problem. when the technology of writing was introduced, it fundamentally changed the way human memory functions. no longer did we live in a world of fleeting and ephemeral spoken word, but we could capture and preserve that word. print technologies only further reified the word as an object rather than a spoken event, and remembering became less important. were there similar debates when writing came about? indeed, Plato argues through Socrates in Phaedrus that writing, among other things, destroys memory because it allows things to be written down rather than simply remembered, and that writing is inhuman because it does not allow for the natural give-and-take of verbal communication. similarly, with the advent of pocket calculators, teachers and parents argued that children's mathematical abilities would be dulled by their reliance on the calculator as a crutch. in Orality and Literacy, Ong argues that while these things may be true, in the case of writing, by not having to remember everything, humans were able to engage in previously unachievable analytic thought. science, he argues, would not have been possible without writing. Ong goes into a much greater exploration of the subject in his book, as well as making some conjectures about the potential impacts of digital technologies (some of which I quite disagree with). it's worth the read if you're interested in such things.

back to the matter at hand, I'm not arguing that we need to just sit back, accept the fact that everything is remembered, and figure out how as a society we are going to adapt to this change. I would agree that computational systems are fundamentally different from the technologies of writing and printing with respect to memory. namely, writing and print allow us to record things, but digital technologies enable retrieval, and at continually improving speed and accuracy. thus, while we might have been able to remember things externally with books, search-type technologies enable an entirely new form of access to these external memories. essentially, the question becomes, how do we decide what gets remembered, and when do we decide to remember it? Gillian Hayes has done some really interesting work on systems that constantly archive everything, for example social interactions in a public space, but automatically delete the archives after 30 minutes if no one says, "I want to remember that." her work is really top notch, and I highly recommend checking it out. while it might not always be possible to know that you want to remember something until after the fact, it certainly has benefits over the common alternative, that is, the approach of archiving everything but only allowing people to find something specific for which they are looking. this latter take leans much more on the side of the panopticon, but you don't run the risk of accidentally not remembering something important. neither approach is perfect, but I think both are better than devices that forcibly, automatically forget after a specified amount of time.
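the archive-everything-then-delete-unless-kept model amounts to a rolling buffer with an explicit "remember that" flag. here's a rough sketch of the mechanism as I understand it from her papers; all the names and the interface are invented for illustration, and the real systems are far more sophisticated (they deal with audio/video capture, not strings):

```python
WINDOW = 30 * 60  # the 30-minute grace period, in seconds

class SelectiveArchive:
    """toy sketch: record everything, but only clips explicitly marked
    "I want to remember that" survive past the grace window."""

    def __init__(self, window=WINDOW):
        self.window = window
        self._buffer = []  # each entry: [recorded_at, clip, keep_flag]

    def record(self, clip, now):
        """everything is captured by default, flagged as disposable."""
        self._buffer.append([now, clip, False])

    def keep(self, clip, now):
        """mark a clip still inside the grace window as worth remembering."""
        for entry in self._buffer:
            if entry[1] == clip and now - entry[0] <= self.window:
                entry[2] = True

    def purge(self, now):
        """drop unkept clips older than the window; return what remains."""
        self._buffer = [e for e in self._buffer
                        if e[2] or now - e[0] <= self.window]
        return [e[1] for e in self._buffer]

archive = SelectiveArchive()
archive.record("hallway conversation", now=0)
archive.record("lab meeting", now=600)
archive.keep("lab meeting", now=900)  # someone hits "I want to remember that"
print(archive.purge(now=4000))        # only the kept clip survives the window
```

note how this inverts the usual archive: the default is oblivion, and remembering is the act that requires human intent, which is exactly what makes it less panopticon-like than a search-everything archive.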
