Re-inventing the wheel, I mean, human
Over the weekend I read David F. Dufty's Lost in Transit: The Strange Story of the Philip K Dick Android, whose subject matter is plainly described by its sub-title. I found the book informative and entertaining in its own right, but reading about androids also reminded me of a rhetorical question I asked in the first entry in this blog: what is the purpose of creating human-like artificial intelligence when we have seven thousand million human intelligences already?
I, and most other academics, could probably produce a long list of intellectual reasons for such a pursuit, ranging from better understanding of how humans interact with other animate objects to illuminating the concept of "intelligence". But some of the folks in Dufty's book (and elsewhere) clearly think that human-like artificial intelligences have more immediate practical uses.
David Hanson, the sculptor who championed the project and built the android's head, argues that "when we interact with things in our environment we interact more naturally, and form more natural relationships, with things that look like us" (p. 73). The surrounding text suggests that Hanson was also thinking about the more academic reasons outlined above, but I think the assumption in this quote deserves some scrutiny.
An essay that I read some years ago, and whose citation I now forget, disputed this kind of thinking using the example of cars. Nearly everyone can learn to drive a car, and we think nothing of it once we've got our licence. This is not, the essay points out, because cars look or behave anything like people, but because they have an interface suited to the task of controlling a motorised vehicle. Why expect that computers (or robots) should be any different?
Hanson might be correct in surmising that we interact more naturally with devices that look like us, at least in the sense that such interaction requires no skills beyond those we acquire informally at a very early age — though I suspect that many people (Sherry Turkle, for one) would consider the concept of a natural relationship with an artificial being to be an oxymoron. But that's not to say that a hammer or refrigerator, for example, would necessarily be easier to use if it looked like a person.
I think Hanson means to imply that human-like interfaces are worth pursuing because they seem likely to be the most appropriate ones in at least some situations. I'm not yet convinced that human-like interfaces are the best way of interacting with anything other than humans, but maybe that's because I don't have any particular uses for androids.
