I Don't Want To Be A Nerd!

The blog of Nicholas Paul Sheppard

Summary and Conclusion

2015-10-15 by Nick S., tagged as philosophy

I've gone two whole months now without adding any new entries, partly because I've had a busy semester and partly because I haven't come upon any topics on which I've felt I had anything new to say. I'm also about to take up a new position, at the Singapore Institute of Technology, which will mean a change in country and in my day-to-day work. In view of all this, I've decided it's time to close the blog.

I don't mean to forget about the things I've written about here, and I hope I'll be able to continue thinking and writing on them in future. But I expect that the new position will give me plenty to do in other areas as well, and it may take me some time to settle into a new pattern of endeavour. If I do return to blogging, or any other form of writing, I expect I'll be doing it as a faculty member at the Singapore Institute of Technology.

Writing the blog has refined my thinking on a lot of topics, led me to some interesting ideas, and spun off a couple of articles for The Social Interface and The Conversation. There's a list of categories (as my blogging software calls them) on the right-hand side of the screen, but I'd also like to note a few of the major points here, in no particular order, along with links to some of the entries that have had the most impact on my thinking.

The computer industry is not special. Computer technology is just one of many modern and ancient technologies that can empower us and make our lives more comfortable, and its industry has no claim to special favours over other industries, nor to special exemptions from economics, society or law. I've most often written about its interaction with the creative industries, which are frequently portrayed as greedy and/or clueless stick-in-the-muds frustrating an imagined right of helpless consumers to have entertainment delivered to their computers on their own terms.

Critical computing. Both techno-utopian and techno-dystopian narratives portray humans as helpless tools of computer technology, one for the better and the other for the worse. In reality, we do have choices about the way that we interact with computing devices, but it is easy to forget to exercise them, whether because we're besotted with the latest fad or because we're pursuing a fast-and-easy route that ends in superficial simulacra of what we actually want to achieve. We need to put effort into learning what our devices can and can't do, and how best to apply them to our needs.

Privacy is not secrecy is not freedom. The freedoms enjoyed by the citizens of liberal democracies are protected by pluralistic societies and a rule of law upholding the values of such societies. The freedom to be oneself in secret is not freedom at all; both states and citizens are accountable for what they do, and much ink continues to be spilt indulging fantasies of totalitarianism that contribute little to the debate.

Where to now?

While I'd like to think that this blog and its spin-off popular articles have gone a small way towards the kind of contribution meaningful to people outside computer science that I wrote about when I created the blog, I'm yet to publish a peer-reviewed article or anything with similar kudos. I still like to think I could do this, and I have a few ideas drafted, but I expect I'll take some time to settle into my new position before pursuing them further.

I also hope to continue practising computer technology, not just writing about it. Reading some reviews of the latest technology in a recent edition of APC Magazine, I was struck by how many of the products seemed like toys for the wealthy, likely to make only a marginal difference to the quality of life of people who can afford to buy things like augmented-reality goggles and high-definition televisions. But of course the products mentioned were only a small subset of all the products out there, and I'd like to think that there are less banal uses for technology that I could turn my hand to.

And so here I hang up my keyboard, until that next article or program.

On science fiction and political economy

2015-07-28 by Nick S., tagged as artificial intelligence, prediction

Continuing the science fiction theme from my previous entry, I recalled an interview in which Iain Banks described "the Culture", the society in which most of his science fiction novels are set, as his utopia. One of the distinguishing features of the Culture is its population of artificial "Minds" that perform tasks from waiting on the biological citizens of the Culture, to commanding mammoth spaceships, to governing the whole society.

At first I wasn't convinced — I have no desire to add any extra arms, as does the protagonist of The Hydrogen Sonata (2012), for one — but, having considered some of the alternatives over my past few entries, I'm coming around to the idea. Banks' Minds are a pretty friendly, helpful and cooperative bunch, far from the totalitarian overlords featured in the Terminator movies and the incomprehensible tools of an in-the-know aristocracy imagined by Tyler Cowen and Frank Pasquale. The human(-like) characters don't need to work, but give purpose to their lives through elaborate hobbies like games of strategy (The Player of Games, 1988), absurd musical instruments (requiring those extra arms in The Hydrogen Sonata), and carrying out the alien missions that drive most of the novels' plots. (They also take plenty of time out from their hobbies for parties, sex and drugs.)

Of course Banks doesn't describe the economic or political mechanisms by which all this comes about. The same could be said of Star Trek, in which future humans are imagined to spend their time "improving themselves" rather than working for more material wealth.

Come to think of it, I can't recall science-fiction-inspired technology pundits like Project Hieroglyph or Brian David Johnson's "Science Fiction Prototyping" column in IEEE Computer saying much about economic or political mechanisms, either. Like most people, perhaps, they're primarily interested in how particular imagined technologies might impact society. This might be a fine thing to do, but the thoughts above lead me to wonder if the world could also use some "political economy fiction" exploring something broader than adventures with a particular technology or scientific theory.

Perhaps any such fiction is destined to sound like an old-fashioned utopia, and the term "utopia" has become something of an insult, describing a narrow idealistic vision that suits the interests of its proposers while ignoring the interests of everyone else, and is generally impractical besides. My differences with those I describe as "techno-utopians" in particular were a large part of my motivation in beginning this blog. Still, in the essay that inspired Project Hieroglyph, Neal Stephenson laments what he perceives as a failure to pursue big technological ideas like space travel and robots. But if pursuing space travel and robots is interesting and important, why not our political and economic institutions as well?

Some thoughts on the Butlerian Jihad

2015-07-21 by Nick S., tagged as artificial intelligence, employment

Continuing to think about automation and employment while constructing my last entry, I recalled the "Butlerian Jihad" that Frank Herbert imagines in the history of Dune (1965). In the far distant future in which the novel is set, the Jihad has resulted in a ban on machines that replicate human mental functions. This ban manifests itself in Dune in the form of human "mentats" trained to perform the computational work that we now associate with machines.

It's been some time since I read Dune, and I don't remember why the Butlerians went on their Jihad, or if Herbert gives a reason at all. But if they feared that thinking machines might make humans redundant, or at least spawn the monumental inequality envisaged by thinkers like Tyler Cowen, Erik Brynjolfsson and Andrew McAfee, could the Butlerians have a point? I imagine that orthodox economists and technologists, including those I've just mentioned, would simply dismiss the Butlerians as a form of Luddite. But why should we accept machines if they're not doing us any good?

Part of the problem with any such jihad, aside from the violence associated with it in the novels, is that what makes us human is not as clear-cut or obvious as traditionally presumed. Evolutionary biology argues that we are not so different from other animals, work in artificial intelligence is continually re-drawing the line between computation and what we think of as "intelligent", and neurologists are yet to identify a soul. The introduction of mentats illustrates the computational part of the difficulty: in ridding the galaxy of machines with human-like capabilities, the Butlerians introduced a need for humans with machine-like capabilities. Brynjolfsson and McAfee (I think) also make the point that it isn't just in mental powers that humans distinguish themselves from machines: humans remain better at tasks requiring fine manual dexterity, meaning that robots aren't yet ready to replace pickers and packers, masseurs, and all manner of skilled tradespeople. Any would-be Butlerians have some work to do in defining exactly what it is that they object to.

A second problem is that people differ in what they want to do themselves, and what they want automated. I enjoy making my own beer, for example, but plenty of other people are happy to buy it from a factory that can make it much more efficiently. On the other hand, I'm usually happy to have my camera choose its own settings for focus, shutter speed and the like, where I imagine a photography enthusiast might be appalled to leave such things to a machine. Should I smash breweries, or photographers smash my camera, to preserve the need for the skills that we like to exercise ourselves?

Of course I don't need to smash breweries in order to brew my own beer: I have a non-brewing-related income that leaves me with the time and resources to brew my own beer even if no one else will pay for it. This brings me back to a point I've already come to several times in thinking about automation and work: to what degree should our worth and satisfaction depend on paid employment at all? If machines allowed us to reduce the amount of work we do, freeing up more time and resources to do what we actually want to do, would we have any reason to fear the machines?

How can engineers approach a race against the machine?

2015-07-19 by Nick S., tagged as dependence, philosophy

Not long after struggling with how to approach work and automation last month, I happened to pick up Nicholas Carr's The Glass Cage (2014) and Erik Brynjolfsson and Andrew McAfee's Race Against the Machine (2011), which cover some of the same territory. My perspective is similar to Carr's in that we both acknowledge that machinery has brought us many benefits — I even make my living from building more machines and teaching other people to do the same — but remain nonetheless wary of the uncritical adoption of machines that at first seem handy helpers, but ultimately prove to be inadequate replacements for human skills and/or straitjackets from which we cannot extricate ourselves.

So what should we be automating, what should we be leaving alone, and how do I reconcile my profession with the possibility that the machines I build will transfer wealth and dignity from the people who used to do the work to the owners of the machines? As Brynjolfsson and McAfee note, the orthodox economic view is that new jobs have appeared to replace the automated ones — and we've done pretty well by this in the long run — but there is no known principle of economics that assures us this will continue indefinitely.

The first principle that occurred to me was to recommend that we adopt machines only when they enable us to do things that could not have been done without them: new technologies must be more than faster ways of performing existing work. This also fits with my doubts about the pursuit of fast and easy as a path to satisfaction.

This principle has at least one flaw, obvious to anyone familiar with arguments in favour of economic growth: automating a specific task that could be done by a human may free that human to do something that he or she couldn't do before for lack of time, energy or resources. The orthodox view I mentioned earlier depends on exactly this kind of process. For this reason, I don't think the principle could be sensibly applied on a task-by-task basis.

Nonetheless, the principle gets at what we surely want from machines in general: why bother with them if they simply leave us doing the same things as before (even if we can do them faster)? What's more, Carr points out that being "freed up" isn't much consolation if it means being unemployed and without access to resources that might enable the victim to make use of their notional freedom.

That I can't apply the principle on a task-by-task basis, however, makes pursuing it very difficult: I have no way of determining the worth of any particular engineering project in light of it. (Not that I often get to make such determinations: my need to pay my bills means that what I do is dictated as much by what other people are willing to pay for as by my private views of what would make the world a better place.) Perhaps the principle isn't hopeless, but it requires a better formulation than what I'm able to come up with at the moment.

Paywalls and adwalls re-visited

2015-06-28 by Nick S., tagged as commerce, privacy

Kat Krol and Sören Preibusch discuss "effortless privacy negotiations" (pp. 88-91) in the May/June 2015 issue of IEEE Security & Privacy. In doing so, they (inadvertently) address some of the questions I wondered about in an article for The Social Interface last year — most notably, whether or not people would be willing to pay for services of the sort now provided by advertising, if it meant that they could obtain the services without handing over data to advertisers.

According to the research cited by Krol and Preibusch, most people would not, but a significant number of people would. I think I suspected as much when I wrote my article, but Krol and Preibusch propose a slightly different (but perhaps complementary) explanation for why they wouldn't: most people value the tangible and immediate gain of access to a service more than the nebulous and future risks of handing over private data.

In the same issue of Security & Privacy, Angela Sasse scolds security nerds for "scaring and bullying people into security" (pp. 80-83) with fearsome dialogues intended to warn people of the risks — again, mostly distant and nebulous — that they face in clicking on links that don't meet the approval of the security community. The same might be said of privacy nerds who demand that privacy policies be read and rejected if readers can imagine misuse of the policy.

Whatever the explanation for people who won't pay, those who would pay might wonder: where do I go if I want to search the web or join social media, but I don't want the ads? None of Google, Facebook, or Twitter will take my money!

Krol and Preibusch mention one (experimental) solution from Google, for whom Preibusch works: Google Contributor. According to Contributor's web page, subscribers to the service will see "pixel patterns" or "thank you messages" instead of ads on participating web sites. (This sounds a bit kludgy, but I guess it's a start.) But the article focuses on negotiation between users and service providers.

I've seen proposals for negotiating privacy settings before, but never found them particularly convincing: why would anyone agree to anything other than handing over the minimum amount of information required to get the job done? Krol and Preibusch identify the point I was missing: the participants need to negotiate not just the privacy settings, but the service they get in return for them. So those who'd rather pay than see targeted ads, for example, could negotiate untargeted service in return for a subscription. (This might not just be about privacy: my main objection to advertising isn't that I'm worried about the data collection involved, it's that I find it irritating.)
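
To make the idea concrete, here is a minimal sketch of what such a negotiation might look like in code. Everything in it (the PrivacyOffer structure, the choose_offer function, the particular offers) is hypothetical and of my own invention rather than drawn from Krol and Preibusch's article; it merely illustrates that users and providers would be haggling over a bundle of data sharing, payment and service, not over privacy settings alone.

```python
# Hypothetical sketch only: none of these names come from Krol and Preibusch.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PrivacyOffer:
    share_data: bool        # user's data may be used for ad targeting
    monthly_fee: float      # subscription fee the user pays
    service: str            # what the provider gives in return

def choose_offer(offers: List[PrivacyOffer],
                 max_fee: float,
                 tolerate_tracking: bool) -> Optional[PrivacyOffer]:
    """Return the first offer acceptable to a user with these preferences."""
    for offer in offers:
        if offer.share_data and not tolerate_tracking:
            continue  # user refuses to trade data for service
        if offer.monthly_fee > max_fee:
            continue  # too expensive for this user
        return offer
    return None  # no deal; the provider might then table new offers

# A provider tables several bundles of data sharing, payment and service.
offers = [
    PrivacyOffer(share_data=True,  monthly_fee=0.0, service="full service, targeted ads"),
    PrivacyOffer(share_data=False, monthly_fee=5.0, service="full service, no ads"),
    PrivacyOffer(share_data=False, monthly_fee=0.0, service="limited service, untargeted ads"),
]

# A user who, like me, would rather pay than be tracked:
print(choose_offer(offers, max_fee=10.0, tolerate_tracking=False))
```

In a real system the process would presumably iterate: if no offer is acceptable, the provider could respond with new ones, and that back-and-forth is where most of the effort that worries Krol and Preibusch would come in.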

The title of Krol and Preibusch's article identifies the obvious weakness in all this negotiation: it takes a lot of effort to both provide and use such a flexible service. Of course reading and understanding current privacy policies requires a fair bit of effort too, which is partly why they remain largely unread and little understood. (The other part is that the reader can't do anything about them anyway, for which negotiation might offer some remedy.)

Still, well-designed computer systems can take a lot of the effort out of things that might otherwise be tedious and time-consuming. Krol and Preibusch don't describe any particular solutions; their article is more of a call-to-arms. I don't know if negotiation is the solution — I'm at least as interested in Google Contributor, which has the advantage of existing — but Krol and Preibusch have at least renewed my interest in something I'd previously dismissed.