On the dangers of social networking, and of not social networking
Last week, I happened across an essay collection by the name of What Is Your Dangerous Idea? (2007), edited by John Brockman. The eponymous question, originally posed by Steven Pinker, asked contributors to Edge for ideas that "are felt to challenge the collective decency of an age".
Many of the contributors discuss ideas that they themselves appear to be comfortable with, but might seem threatening to more traditional thinkers. Scientific materialists, for example, have long been used to the idea that there is no soul, however terrible this might seem to more spiritualist thinkers. So I got to wondering not just what ideas might seem dangerous to society at large, but also what ideas might seem dangerous to me.
I'm sure there are plenty of ideas that threaten both society and me — like, God exists and he's not very happy with what we're doing — but I'd like to stick to the topic of this blog. As it happens, I found myself in a discussion about social networks — primarily LinkedIn — with some work colleagues at around the same time I read the book.
My dangerous idea in this respect is that social networks support an illusion of connection representing nothing more than the mindless clicking of buttons. Facebook and LinkedIn build an audience based on our need to feel connected, and the feeling that it is rude to say "no" to connection requests. They sell this audience to their advertisers, and the advertisers sell their products to us, all without actually connecting anyone.
The dangerous idea to me is the converse one that users of social networks are, in fact, using these tools to build significant relationships, and that I've cut myself off from society and opportunity by refusing them. One of my colleagues, for example, claimed that many jobs are advertised only on LinkedIn, and I've read elsewhere that (some) recruiters rely on LinkedIn to fill positions.
Probably — and possibly hopefully — the truth lies somewhere in between. Perhaps some people successfully create or maintain relationships using Facebook (probably in conjunction with other tools), and perhaps some people find jobs using LinkedIn. But not all on-line connections are equal, and some are surely so superficial as to be meaningless. Nor is Facebook the only way of maintaining a relationship, or LinkedIn of finding a job, allowing each of us at least some freedom to choose the tools that best suit our individual needs. If it were otherwise, I think the only people who wouldn't be endangered might be Facebook and LinkedIn.
On filter bubbles, within and without
I recently read Eli Pariser's The Filter Bubble (2011), which discusses the potential for highly personalised news feeds and search results to trap users in a "filter bubble" from which they see only the news and results that support their existing world-view. Cass Sunstein postulated that this might happen some time ago in Republic.com (2002), but Pariser updates the argument for ten years of advances in recommendation and personalisation technology.
By coincidence, my local university library happened to have Tim Dunlop's The New Front Page (2013) on its "new books" shelf around the same time. Dunlop's book is primarily a chronicle of his adventures in political blogging and the traditional media since the word "blog" was coined, but he does spend a little time discussing Sunstein's thesis. Dunlop points out that Sunstein's original analysis was conjectural, and that political bloggers since 2002 have, in fact, read and linked to the blogs of their political opponents.
To judge by the comments sections of opinion sites like The Drum and The Conversation, Dunlop is probably right as far as he goes: whatever the political alignment of an article, plenty of commenters of a competing alignment can always find time to criticise the article. That's not to say that the comments are necessarily insightful or constructive, or even that the commenters have actually read and understood the article: The Drum, especially, features plenty of mindless repetition of party lines and dogma. One is tempted to observe that, while Dunlop might be right about the motions, Sunstein was right about the end result.
Thinking about the news that arrives in my inbox, and some of the thoughts I've had about the computer industry in writing this blog, I wonder if politics is actually the least likely subject to end up in a filter bubble. Opposing political forces are at least aware of each other's existence, even if it's only to hold each other up as bogeymen. But a computer scientist (for example) constantly surrounded by news about computers can easily forget that the computer industry is but one of numerous industries and agencies that contribute to modern society being what it is. Hence the mutual incomprehension that arises when one industry's orthodoxy conflicts with another industry's orthodoxy.
So I think there's an argument that we're as likely to build a filter bubble by ourselves as we are to have one built for us by technology. All that dogma on The Drum is a case in point: the critics have the opportunity to engage with an article in a meaningful way, but many simply choose to re-state a party line. Even supposedly sophisticated communications theorists sometimes like to interpret the world through a one-dimensional lens, be it class or race or gender or sexuality or technology. I'm yet to meet a communications theorist offering an "industryist" analysis of the media, but many of us might be doing it in our own amateur way by being bound to the fate of the industry in which we work.
There's still only one world that counts
I've just finished reading Edward Castronova's Synthetic Worlds (2006), which is something I probably ought to have done some time ago. Reading it seven years after its publication, however, reminded me that synthetic worlds — notably Second Life — seemed like big news in the computer community at around the time that Castronova was writing. I remember being told that major companies were opening stores in Second Life, luminaries were holding press conferences there, entrepreneurs were making money there, and that anyone who was anyone would shortly be living, at least in part, in a synthetic world. Yet I don't hear much about Second Life or any similar world anymore.
The worlds themselves are still there and, presumably, making a living for the companies that develop them. But neither the media nor the conversations in which I'm involved have much to do with them. Was I, in 2006, hanging around a bunch of starry-eyed gamers unaware that not everyone was interested in their hobby? Is the media still not taking computer games seriously, as Castronova suggests in his introduction to Part II? Has everyone disappeared into a synthetic world, leaving me wandering alone on the outside?
In both the mainstream media and in conversations of which I'm a part, the giants of the computer industry aren't synthetic worlds of the kind that Castronova wrote about, but web-based tools like Facebook, Twitter and Google. And, to go by the numbers, rightly so: according to Statistic Brain, Facebook has over 1100 million accounts, Twitter has over 550 million accounts, and Google responds to over 5000 million searches per day. The largest synthetic world, World of Warcraft, had a comparatively measly 12 million subscribers at its height.
If there are synthetic worlds to which humanity is migrating, as Castronova puts it, they're surely Facebook and Twitter. The home pages of Second Life and World of Warcraft themselves sport those ubiquitous offers to "like" them on Facebook and follow them on Twitter.
I can think of several possible explanations. Firstly, Facebook and Twitter are free, where the synthetic worlds studied by Castronova ask for subscriptions. Secondly, the user base of game-like synthetic worlds is fragmented into numerous followers of different worlds, while Facebook and Twitter completely dominate their markets.
Lastly, though, I wonder if synthetic worlds have themselves met the fate of virtual reality identified in the appendix to Castronova's book. As Castronova has it, the researchers behind virtual reality originally supposed that virtual worlds would be created by completely immersing the users' senses in computer-generated stimuli. It turned out, however, that relatively crude representations of characters and landscapes on an ordinary computer are good enough to keep users' minds in a synthetic world. But maybe most of us don't even need those crude representations, at least not most of the time: our needs are adequately met by augmenting the real world with web profiles and instant messaging. After all, it's the one world from which we cannot migrate.
Free flows of information [about someone else]
After the fuss surrounding PRISM last month, I was bemused to find the historical pages of the July 2013 issue of IEEE Computer reporting that, thirty-two years ago, "worldwide protectionist legislation is threatening the free flow of information across borders". This lament is attributed to W. Michael Blumenthal, then the chairman of the Burroughs Corporation (now part of Unisys), in the July 1981 issue of the same magazine (p. 115).
How things change! I thought. In 1981, Blumenthal feared that governments might enact legislation preventing the free use of private data. In 2013, computer enthusiasts fear that governments themselves might be making free use of private data.
Reading the original article in its entirety, I realised that the two attitudes probably aren't as contradictory as they first seemed. I take Blumenthal to be referring to computer companies' ability to use data as they please, while modern critics of PRISM are referring to the government's ability to use data as it pleases. The two sentences in the previous paragraph are perfectly consistent when interpreted in the light of self-interest: the computer industry would like to do as it pleases with whatever data it can collect, while it has nothing to gain from law enforcement agencies' use of similar data. For law enforcement agencies, it's the other way around.
I think just about all credible systems of ethics, justice and law try to resolve this problem by demanding some variant of the Golden Rule: treat others as you would have them treat you. Or, that everyone is equal before the law. In a privacy context, I may not have anything immediate to gain from someone else's use of data, but it would be irrational (or at least egomaniacal) for me to deny someone else use of data in a way that I believe I'm entitled to do myself. If I think beyond my immediate self-interest, I see that I do have an interest in allowing other people to make use of data insofar as it provides a moral basis for my right to use data in the same way.
Computer companies and law enforcement agencies, though, differ markedly from individuals and from each other. Law enforcement agencies do plenty of things that would be regarded as vigilantism if I did them myself, and I have no conceivable need to sell customer information since I'm not an ad-supported business. Yet I appreciate enforcement of the rule of law and (sometimes) ad-supported services, so it's not so simple to say that I should treat them as I would have them treat me. So perhaps we need something a little more nuanced.
I was recently introduced to the work of John Rawls, who proposed that a "just" system of laws is one that would be agreed to by people with no knowledge of what social position they were going to be born into. I wonder if computer companies, law enforcement agencies and the rest of us could learn from similarly pondering how we would establish rules for the sharing of data without any knowledge of the industry in which we were to be engaged?
One man's enthusiasm and another man's inanity
This quarter's issue of IEEE Technology and Society presents the results of a survey of blog entries concerning tablet PCs conducted by Efpraxia D. Zamani and colleagues. Two things about the survey struck me immediately: that "most of the bloggers hold upper level managerial positions", and that nearly all of the material quoted from their blogs seems rather inane. Zamani et al. could almost be writing a parody of Dilbert's Pointy-Haired Boss.
To be fair to upper-level managers, Zamani and colleagues compiled their material in a way that seems likely to select only the most inane stuff: they searched for "blog" and "iPad", and discarded any technical reviews. That is, they sought out casual writing about iPads and explicitly ignored rigorous reviews of the technology. We can still hope that most upper level managers actually have better things to do than write uninsightful observations like "it was ultra-convenient to just flip out the iPad ... without having to whip out a laptop or projector" (p. 76), which I would otherwise expect to find only in mediocre undergraduate essays. What, exactly, is the difference between "flipping out" and "whipping out"?
Whatever the merits of the bloggers' own observations, Zamani and colleagues identify a euphoric attachment to technological devices that seems totally alien to me. There are plenty of devices that I find useful for one purpose or another, and occasionally I might even say as much in conversation or on this blog. But I don't think I'd ever use words like "love", "passion" and "excitement", as Zamani and colleagues do in the last part of their article (p. 78). I don't, for example, feel any loss when going for days or even weeks without my phone or computer when camping or travelling. (I do eventually wonder if any of my friends sent me any e-mail, though.)
In part, I guess this reflects my engineering background. For me and other engineers, technological artifacts are simply the end result of sound engineering principles. If we get excited about anything, it's the cleverness and power of the principles themselves. For non-engineers, however, technological artifacts can be magic boxes to be marvelled at in their own right.
Perhaps I'm also just not one to feel attached to non-human objects. Similar enthusiasm about cars and pets, for example, leaves me feeling cold. I do feel sentimental about objects I've owned for a long time, and I hate throwing things out. But I can't imagine myself writing a blog entry praising the guitar I've owned for twenty years (but rarely play), or chronicling the life and times of my once-sturdy pair of cargo pants that were torn beyond repair during a hike last month.
I certainly can't imagine anyone wanting to read such a blog entry. But I guess iPad enthusiasts probably don't find my actual blog very interesting either.
