On the structure of computing revolutions
I recently read an article mocking its own authors for failing to recognise that the iPhone (or some particular version of it) would instigate a revolution. Unfortunately I didn't record where I read this, and I haven't been able to find it since, despite later thinking about what constitutes a "revolution", and what it might feel like to live through one.
My immediate reaction upon reading the article was: are you sure you weren't right the first time? I, at least, don't feel like I've been through a revolution any time in the past ten years, or, indeed, my entire life. Sure, technology has steadily improved, but I've only ever perceived it as "evolution". I have no doubt that someone catapulted into 2012 from the time of my birth in the 1970s would find much to be amazed about. But, having lived through all of the intervening years myself, I had the much more mundane experience of seeing the changes one product at a time.
This raises the question: how much change is required, and how sudden does it need to be, to constitute a "revolution"? When talking of the history of computing to my computer systems students, I often talk of "trends" from analogue to digital, from stand-alone computers to networked ones, and from single-core to multi-core CPUs. I say "trend" because I perceive the changes as a gradual process of older products being replaced one by one by newer products. But proponents of the iPhone (or digital or network or multi-core) revolution presumably perceive the changes as one big leap from existing products to a spectacular new entrant. (Either that, or they use the word "revolution" to mean "any perceptible change".)
Now, many small changes may add up to a big one. Someone of my mind born in Britain in 1800, say, might have observed machines or factories appearing one at a time over his or her lifetime. But that person's lifetime now seems short compared to the span of human history, and we consequently refer to that period as the Industrial Revolution. Still, I suspect that future historians will be looking at more than iPhones when they decide what to call the current period.
One of my students foreshadowed the taxonomic problems awaiting future historians when he observed to me that the articles he had been reading disagreed about what era of computing we currently enjoyed. I forget the exact list of candidate eras, but one might have been the "mobile era" and another the "network era", and so on. Off the cuff, I suggested two explanations: firstly, that his sources were talking crap, and, secondly, that his sources were talking about two different aspects of computing.
The two explanations might not be mutually exclusive. Perhaps the iPhone revolutionised mobile telephony/computing for some definition of "revolution", but I didn't notice this revolution because I do relatively little telephony and mobile computing. But the iPhone didn't revolutionise other aspects of computing -- let alone biotechnology or space travel or any of numerous other technologies of the modern period -- so attributing a broader revolution to it would seem to be a load of crap.
Why is it so boring to use the right tool for the job?
In thinking about both tablet PCs and Alone Together over the last month or so, I noted the paradigm of using the right tool for the job. To recommend using the right tool for the job seems fairly banal, but I wondered if my perceived need to recommend it reflects the apparent existence of a contrary view in which there exists, or will shortly exist, some universal tool appropriate to all uses.
Henry Jenkins refers to this contrary view as "the black box fallacy" in his book Convergence Culture. I find it hard to identify any particular person who propagated the black box fallacy -- or dream, if you disagree with Jenkins and me -- and I can't imagine anyone owning up to a statement as simplistic as "device X is all we will ever need". Yet the black box idea seems implicit in utopian (and dystopian) narratives such as the one implied by the question "Have digital tablets become essential?"
To be fair to anyone anticipating the arrival of a black box, there are presumably some limits in mind, albeit unstated and vague. Surely no one foresees a single black box performing all the functions of a computer, a vehicle, an oven, a refrigerator and a washing machine! But, even if we restrict the imagined functions of a black box to those currently performed by microelectronics, why expect a single box when there is plainly a whole host of different boxes on the market?
I suppose that the hype and excitement surrounding a new device tends to drown out news of existing devices, giving a false and unintended impression that the new device is far more important and interesting than the old ones. Presumably not even the most enthusiastic supporters of smartphones or tablet PCs believe that such devices are about to replace server farms or home theatres, for example. But the features of server farms and home theatres are likely to be far from the mind of someone enthusing over the latest mobile device.
The gradations between phones, smartphones, tablets, netbooks, laptops and desktops are more subtle, though. If desktop computers were only introduced in 2012, after we had been accustomed to mobile telephony and portable computing, could we be so amazed by their computing power, large screens and keyboards as to forget that they aren't very mobile?
Alone together and feeling used by communication tools
My recent difficulties with social networking inspired me to read Sherry Turkle's Alone Together: Why We Expect More from Technology and Less from Each Other. The book's subtitle neatly captures my dissatisfaction with LinkedIn and other supposedly social media: it's very easy to click a button that creates a record in a database stating that I'm "connected" with someone, but there's a whole lot more to do if I want to form and maintain a significant and effective relationship with that person.
Turkle makes a distinction between "performance" and "friendship". In the first half of the book, "performance" refers to robotic toys that are programmed to enact rituals that children expect from conscious beings: the robots say they are happy, hungry, etc. even though they (presumably) don't experience such emotions as humans do. In the second half of the book, "performance" refers to manipulating text messages and Facebook profiles to present the desired standards of coolness, connection and caring. She believes that
sociable technology will always disappoint because it promises what it cannot deliver. It promises friendship when it can only deliver performances (p. 101).
Turkle acknowledges critics who point out that we are always performing to one degree or another, in that we craft different personae for friends, family, work, school and so on. And how does one distinguish "authenticity" from a highly sophisticated and nuanced performance anyway? Of course Turkle contends that the performances exhibited by current robots and social networking sites are hopelessly inadequate to fully capture human emotion and relationships, and I find it hard to disagree.
It is, of course, conceivable that improvements in technology will one day overcome such inadequacy. But what to do in the meantime? Turkle doesn't recommend eliminating robots and social media and, indeed, seems to be quite comfortable with handing them out by the dozen as part of her research.
For me, the answer has to be about recognising the capabilities and limitations of particular media, employing them for what they are good at and dispensing with them for what they are not. The saddest stories in Turkle's book involve people feeling psychologically or socially compelled to use some tool despite its evident incapacity to meet the person's needs. Someone whose only tool is a hammer, as the saying goes, struggles with tasks that don't involve nails.
Most of the people in Turkle's studies are young -- children or teenagers -- and it could be that they simply haven't yet learned which tools work best for which tasks. Even older people struggle with how best to use new tools. Perhaps it isn't so surprising that things go awry in these situations.
Towards the end of the book, Turkle writes about people who have realised that the tools they have been using aren't working for them, and have consequently developed strategies like scheduling one-on-one phone conversations and deleting their Facebook profiles. Some of these strategies are fairly crude, but I think they demonstrate an important (and possibly under-rated) mind-set: a determination to make technology serve one's needs in place of passive acceptance of what technology happens to be in vogue.
Hackers? In this day and age?
The (Australian) ABC's news web site recently featured a radio discussion between two unidentified persons regarding anonymous publication of material on the Internet. I'm not familiar with the story that sparked the discussion, but the conversation caught my attention for two reasons. Firstly, one of the participants referred several times to classical computer hacker attitudes that I had thought had vanished, or at least been seriously marginalised, by the popularisation of the Internet. Secondly, the other participant noted that certain "rights" that such hackers suppose to exist (in this case, anonymity and taking any file available for download) do not actually exist in law.
My graduate certificate in communications had me studying a lecture that, in part, presented the romantic ideal of computer hackers as freedom-loving individuals bent on understanding, using and, if necessary, subverting computer technology for some greater purpose. I gather that many of the students were not particularly impressed with this portrayal, possibly because they identified "hackers" with virus-writers, identity thieves and spammers. While I don't think either the lecture or the original users of the word "hacker" intended it to mean "computer criminal", I also think it's very naïve to equate freedom with the power to use technology in whatever way one is capable of doing.
My own response to the lecture described the hacker mentality as a "might-makes-right philosophy that equates freedom with one's technological power to exercise it". Inspired by a related observation in David Brin's The Transparent Society, I postulated that competitions of technological power would, in fact, be won by well-resourced organisations rather than a few lone hackers.
Sure, classical hackers have won the occasional battle like reverse-engineering the Content Scrambling System for DVDs or jailbreaking iPods. But I'm pretty sure that Google, Apple, Microsoft and the rest ultimately have a far mightier influence over our electronic devices than Jon Lech Johansen, Richard Stallman or even Linus Torvalds. Meanwhile, the public's image of a "hacker" is largely informed by the kind of lawless computer whizzes they encounter most often: spammers, phishers, data thieves and authors of malware.
The law recognises that freedom is not the same thing as raw capability, and curtails rights like freedom of action and freedom of speech where, in the view of the law-makers, one person's exercise of those freedoms would interfere with someone else's freedom or well-being. So my freedom and ability to write e-mail software, for example, does not entail the right to e-mail fraudulent advertisements for Viagra to every e-mail address I can download.
Perhaps an honest-to-God cyberlibertarian would say that I should have the right to send whatever e-mail I like to whomever I like. But would he or she appreciate the same activity from Google, say, who possesses vastly greater reserves of information and software development skill than I?
Building a social network, one auto-generated message at a time
Today I received an invitation to join ResearchGate, which I gather to be a kind of social network for scientists. I'd never previously heard of ResearchGate and, almost certainly, they'd never heard of me. I nonetheless warranted an invitation because I co-authored a number of papers with someone who had already enrolled.
I mostly reject automated invitations of this sort, in part because I resent web sites expanding their business by taking advantage of my relationship with a third party and in part because I like to think that my real friends could be bothered to write real e-mails. But my experience of LinkedIn is my greatest motivator.
At the time I received my LinkedIn invitation, I had no experience of such sites and it seemed worth a try. But I never found anything useful I could do with it, and I gradually realised that my LinkedIn page was a graveyard of ex-colleagues who had sent me connection invitations but with whom I no longer actually communicated (via LinkedIn or otherwise). I began to wonder if sending a LinkedIn invitation was a tacit declaration that "I will never talk to you again."
After a few years of this, I began replying to invitations with a personal e-mail explaining that I don't really use LinkedIn. In response, one of my would-be connections admitted that she didn't really use LinkedIn either, but she just felt compelled to click on the "Do you know?" buttons. I was already pretty sure that my own LinkedIn connections were a fraud, and my friend's message suggested to me that I'm not the only one. I've since deleted my LinkedIn profile, and I refuse all new invitations with an e-mail explaining that I don't use LinkedIn.
It still feels slightly rude to reject invitations, and perhaps LinkedIn members feel it would be rude to ignore the question "Do you know?" when they do, indeed, know that person. I wonder if we instead ought to feel rude for allowing Internet companies to exploit our relationships in order to build their customer bases, and to present false social networks built up by automated messaging and idle button-clicking.
