Imagining totalitarianism
The Conversation (amongst others) last week had plenty to say about "PRISM", with Philip Branch, Sean Rintel, Alan Woodward, Grant Blank, and Ashlin Lee and Peta Cook all weighing in on the US National Security Agency's alleged programme to collect information from the servers of US Internet companies.
I found it curious that the criticisms levelled at this kind of surveillance are largely (though not completely) theoretical, in the sense that they don't much discuss actual instances of people suffering at the hands of such systems. A mention or two of Watergate seems to be about it, and that happened forty years ago.
Now, what constitutes "suffering" may be a matter of opinion. Does it do someone harm to be embarrassed? To be in the NSA's files? To be judged by information collected by Google and Facebook? And perhaps it's hard to find people suffering because such systems haven't been widely used in Western countries (though Lee and Cook's contribution suggests otherwise).
The orthodox view amongst those who write most about privacy seems to be that the collection of data is harmful in and of itself. The classical view amongst technologists, in particular, is that privacy consists of never telling anyone anything, and hence their fascination with technology like Tor and Bitcoin. It certainly seems sinister enough to imagine that there's some organisation watching one's every online move. After all, what good could such an organisation possibly do for the person being watched?
The answer is that, whatever conspiracy theorists might like to imagine, I don't think there are any organisations that collect data simply for the sake of it. Google, Facebook and the rest collect data in part to serve the immediate needs of their users and in part to meet their own business needs. The NSA and similar organisations collect data to serve what they perceive to be the public interest. To get worked up about the mere collection of data is to miss the point: the real question concerns the purposes for which the data is used, and whether or not the benefits of those purposes outweigh the costs.
Making simple allusions to totalitarian states and Orwell doesn't answer this question. The problem with totalitarian dictators isn't so much that they spy on their citizens, it's that they persecute citizens who hold views disagreeable to the dictator. Indeed, any organisation that simply collected data for its own sake would just be a corporate variant of the oddballs that appear on Collectors.
Technology to the rescue
Apparently deciding to take a break from electronics for the month, IEEE Spectrum takes a look at agricultural technology for its June 2013 issue. Spectrum is sufficiently impressed with what it sees to predict the coming of an age of plenty with food for all, whatever crises and starvation less optimistic forecasters might fear.
Keith Fuglie (pp. 20-26) leads the optimism with an article explaining his supreme confidence that agricultural technology will provide nutrition for everyone into the foreseeable future. Whether or not we're going to starve is a topic for a different blog, but I do want to comment on the technology-bound world-view apparent in Fuglie's article and many of the others that follow it.
From the standpoint of technological optimism taken by Spectrum's contributors, all problems can, must and will be solved by technology. While a technology magazine like Spectrum could be expected to focus on the technological aspects of its subject matter, technology-bound articles like Fuglie's do not even appear to imagine that solutions might also come from policy, design, economics, culture and other areas. It's technology or bust (but of course there will be no bust because technology is presumed capable of solving any problem).
One can imagine an engineer who, upon seeing a piece of litter beside the road, sees an opportunity to develop an army of rubbish-collecting robots. A city taking up this army could spend millions of dollars to free its citizens from the trivial hassle of putting their litter in a bin. Pro-robot councillors, I suppose, might argue that litterbugs will drop litter regardless of how cheap and easy the bin seems to tidier citizens, and the robots will completely solve the problem where civic virtue might only partially solve it. But that tells a pretty sad story of the cost of laziness and irresponsibility: one might say that the technology has improved, but the citizens haven't.
Still waiting for the digital economy to produce free stuff
Reuters recently reported that a US court had suggested that "many authors could benefit" from Google's plan to scan books. Some supportive commenters over at The Register quickly jumped at the opportunity to agree that Google knew what was best for those silly authors who resisted their work being posted without permission or recompense for the benefit of the computer industry.
To be fair, the second comment on The Register's article gets right to the point of whether it should be up to authors to decide how they publicise and exploit their work, or up to aggregators like Google. Numerous businesses use free samples to publicise their work, ranging from fragments of bread in a dish at my local bakery to blasting out ad-supported music on national radio stations. But I'm pretty sure my local baker decides what and how much bread to put on the counter, not some multinational bread information aggregator that doesn't actually bake or sell any bread itself.
Google itself maintains that its service represents a fair use of the books that it scans. Fair use (and similar provisions in other countries' copyright laws) implies that there is some significant public benefit in the copying being done, and/or no significant harm being done to the copyright holder. It's certainly conceivable that Google could indeed do something in the public interest here, even if it serves its own interest at the same time. But that remains to be established by the court.
All this is beside the point, however, for computer enthusiasts desperate to believe that artists could thrive if only they would allow computer users to enjoy the fruits of artists' labour without having to pay for it.
A little while ago, a Conversation article from Karl Schaffarczyk directed readers to Birgitte Andersen's 2010 article Shackling the Digital Economy Means Less For Everyone in support of the notion that "it is widely accepted that people download music and other content due to the failure of the market to deliver what the consumer wants." I couldn't find any such statement in Andersen's article, much less any evidence for it. I did, however, find plenty of selective citation from the literature on the effect of copyright infringement on sales, and unsubstantiated assertions about "outdated business models" of the music industry and (unspecified) "revolutionary business models" of the computer industry, just as I've previously commented on for The Social Interface.
It may nonetheless be true that the market has not offered what Schaffarczyk, Andersen and other consumer-oriented commentators want, if what they want is free access to books and music. But a successful market needs to offer benefits to both the consumers and the producers, and no amount of inserting computer companies into the pipeline between them will make free labour appeal to producers.
Middle Eastern conflict traced to meddling search engine
The Australian Broadcasting Corporation recently reported that Google had recognised Palestine by replacing the name "Palestinian Territories" with "Palestine" on Google's page for the state (or whatever term Israel would prefer that we use).
I'm sure I'd be amongst the first to say that news outlets of all sorts — even the ABC — present plenty of stories that are of no real consequence to anyone. This particular non-story, however, brought me back to an issue that I also encountered in news outlets' description of a widespread cartographic error as an "IT glitch" due to its existence in Google Maps: why should it be newsworthy that Google repeats some information or decision made by its sources?
Google itself is quoted in the Palestine article as saying, modestly enough, that "We consult a number of sources and authorities when naming countries. In this case, we are following the lead of the UN." Google is an aggregator of information after all, not a creator of it. Yet Google's "recognition" of Palestine is news for the ABC.
The Register's coverage of the same story makes more of Israel's opinion that "this change raises questions about the reasons behind this surprising involvement of what is basically a private Internet company in international politics". Israel's reaction is arguably more of a story than Google's change itself. At the risk of having the Israeli foreign ministry releasing similar indignant statements about this blog, though, it raises the question of why the Israelis think it worth commenting on Google's application of a UN decision. (Their real argument is presumably with the UN, which made the decision in the first place, and I'm not going to go there.)
Google is a wealthy and powerful company, to be sure, and the actions of the wealthy and powerful are newsworthy enough in many circumstances. But are news outlets doing themselves or anyone else (bar Google) any favours by reporting as if Google is the final arbiter of all human knowledge and convention?
I recommend to my students that, when doing research, they seek out the original source of some item of information in order to critique and verify it. The difference between primary and secondary sources was, after all, a high school topic for me. But I suppose that they might feel justified in ignoring me when they see that they could get jobs reporting about Google search results.
Boldly going where many have been before
A couple of weeks ago, The Australian's higher education section quoted Anant Agarwal, president of edX, saying that "education hadn't really changed for hundreds of years" (27 March 2013, p. 26). I don't know which schools and universities Agarwal has visited over the past two or three hundred years, but the statement drew my attention to a tried-and-true technique of would-be revolutionaries: deny that anything that happened before today was of any consequence.
For those dreaming of the day that computers revolutionise education, university lecturers are apparently still getting about in black robes and discussing the finer points of Galenic medicine in Latin with their exclusively white male students. It makes me wonder who's really out of touch here.
Of course modern schools and universities also continue some practices that would be familiar to their mediaeval forebears, including the human teachers and lecturing and tutoring that I take to be the subject of on-line educational scorn. But perhaps there's a reason for this continuity: it works. Would anyone suggest that the Roman alphabet is due for a shake-up just because "it hasn't really changed for hundreds of years"?
Writing about older computer workers' difficulties with finding employment in her book Cyberselfish, Paulina Borsook speculates that older workers might be disadvantaged by the lack of excitement they show when presented with a new buzzword that looks suspiciously like technology they worked with ten or twenty years ago. So we're using thin clients to access our data in the cloud now? Sounds rather like the dumb terminals and mainframes that we covered in the history-of-computing discussion in my operating systems class last week. Unhampered by any knowledge (or at least experience) of history, younger workers impress by the excitement they show when encountering an idea for the first time.
Similarly, massive open on-line courseware seems so much more exciting if one has never encountered — or makes a habit of ignoring — the textbooks, video lectures, educational software and on-line learning management systems that existed before it. And Heaven forbid that any of our ancestors ever had a good idea.
