Dream jobs and nightmares at home
Today I read two articles expressing more or less opposite views (or at least hopes) of employment in engineering. On one hand, IEEE Spectrum presented its annual round-up of dream jobs, beginning with a lament that "unflattering stereotypes persist, and they're tired [and] out of touch with reality". The Register, on the other hand, reports the views of one Mike Laverick that you need a home lab to keep your job — that is, you need to be exactly the kind of technology-bound stereotype that Spectrum wishes to take on.
Spectrum is, of course, cherry-picking a very small number of individuals who have what it describes as "dream jobs" involving travelling around the world, working on exotic projects and/or making noble contributions to humankind. The Register might be more representative of the common mass of engineers working on (presumably) worthwhile but unglamorous projects for mundane employers in their home town. The commenters on The Register's article certainly sound a lot more like the lab-at-home folks than the high-flying Renaissance men and women featured in Spectrum.
I don't have a lab at home, and, indeed, resent the notion that I ought to spend my spare time training up for a job for which I hold high formal qualifications and years of experience, and in which I continue to work day in, day out. Laverick's attitude seems to me to pander to what I think are wrong-headed views of programming that prioritise familiarity with the latest buzzword over the fundamental engineering skills possessed by truly competent programmers. Yet buzzwords are doomed to come and go, and I, at least, feel I have better things to do than pursue them in a lab at home. What other trade or profession demands that its members practise their craft not just in their professional lives, but in their spare time also? (One even hesitates to imagine the goings-on in the home labs of, say, surgeons and nuclear engineers.)
So I'm probably more typical of the readership of Spectrum than of The Register. Indeed, it isn't immediately obvious that Spectrum's dream workers have a better lot than I do (though I suspect they earn more money). Simon Hauger appears to do more or less the same kind of work that I and numerous teachers do, while Marcia Lee appears to do more or less the same kind of work that I and numerous other software developers do, albeit for a better-known employer than most of us (the Khan Academy).
Perhaps the most important aspect of Spectrum's dream jobs is that they show some vision for engineering beyond engineering itself. While Laverick and his followers are beavering away in their home labs in pursuit of yet more engineering cred (or at least buzzwords), Hauger, Lee and the rest are thinking about how science and engineering serve education, art, adventure and human development. This might sound like a utopian dream, but, if engineering isn't serving something like this, why are we doing it?
One industry's red tape and another industry's system
The Australian (30 January 2013, p. 3) reported Tim Berners-Lee's recent visit to Australia under the headline "Inventor against net regulation". The article itself makes clear that Berners-Lee, among many other things, had actually spoken against regulation by the United Nations in particular, and doesn't say whether he had any thoughts on who, if anyone, should regulate the Internet.
Anti-regulation pronouncements like that implied by the article's title never fail to have me rolling my eyes at the naïveté of commentators whose primary criterion for regulation appears to be that it should be short. Of course no one would dispute that regulation ought to be constructed as concisely and straightforwardly as possible in order to achieve its goal. But contemptuous references to "red tape" and the like frequently seem to me to betray both a lack of appreciation for the goal of the regulation and a certain arrogance about the speaker's own perspective relative to that of others.
I suspect that phrasing regulatory arguments in terms of choosing whether to regulate or not misses the point. Regulators must choose which interests to protect, and to what degree. In the context of the data-collection matters to which Berners-Lee apparently referred, for example, regulators must determine the degree to which to protect the interests of private citizens by restraining the behaviour of data-collecting entities like Internet companies and government departments, and the degree to which to protect the interests of data collectors by allowing them to collect and use data as they please. To say that there is some celestial state of nature with which regulators should not interfere is at best incoherent, and at worst lazy capitulation to the interests of the most powerful.
Of course technology companies would prefer that regulators favour their interests, as The Register lampoons in a recent article on Google's opinion of government data collection. Technology enthusiasts, for whom the computer industry is of unique importance, happily tag along with demands that Internet service providers be free from regulation (such as the recent SOPA legislation in the US) that might protect the interests of other industries or of governments — except, of course, when the same service providers want to do something that would impinge on the enthusiasts' own interests, such as harvest their personal data or cap the amount that they can download for their monthly service fee.
I don't envy the job of regulators. I don't imagine they receive many thank-you letters from regulatees, expressing gratitude for the legislation to which the regulatees are subject. Perhaps a regulator in the mould of Dilbert's Wally would welcome the demand to do as little as possible, but who'd want Wally on their staff?
Terrorist, freedom fighter, or hacktivist?
I read a collection of articles today confirming that the traditional hacker ethic isn't quite as dead as I might have thought it to be. Firstly, David Glance tells the story of a Canadian student "expelled for idealistically pointing out security flaws" in The Conversation, while The Register ran articles on the prosecution of Aaron Swartz for fraudulently obtaining access to scientific articles on JSTOR, and on an attack on the University of Western Sydney criticising its decision to purchase iPads for its student population.
The recent Whitehaven Coal Hoax spawned a lot of comment on civil disobedience vs vigilantism in Australia. I don't think I've seen the same terminology applied to computer-based protest — "hacktivism" seems to be the preferred term — but it's surely much the same issue. When is defying the law nobly standing up for a cause, and when is it attempting to get one's way by force?
The cynical answer is that it's "civil disobedience" when one agrees with the political view being expressed, and "vigilantism" otherwise. More nuanced answers involve the availability of alternative methods of protest, the level and kind of harm resulting from the action, and the perpetrator's willingness to brave the ascribed punishment for his or her actions.
Hacktivism typically seems to me to fail most or all of the above tests, starting with the cynical one. The "rights" championed by the hacker ethic are frequently of little interest to anyone other than computing experts, and some of them (such as free access to private and commercial information) would come at the expense of other people and industries. Of course computing experts might have genuine rights that are particular to their profession, but does anyone outside the hacker community find the vandalisation of web sites impressive, much less see the need to establish a right to it?
The Western liberal democracies that hosted all of the events listed in the first paragraph of this entry, and many others besides, provide numerous avenues through which people can make known their opinion on iPads, open access and just about anything else without needing to resort to fraud and the commandeering of other people's computer equipment. Librarians and academics, for example, are already making significant strides towards open access to scientific literature, so what need is there for vigilantism? Sure, your opinion may not be as well-known or as influential as you'd like it to be, but just about everyone else would probably say the same thing.
Gaming, on and off the armchair
Andy Ruddock's article on violent computer games, published on The Conversation last week, mentions Henry Jenkins' opinion, offered in the wake of the 1999 Columbine massacre in the US, that "the trouble with most gaming violence ... was that it was boring".
Having myself tired of yet another first-person shooter around the same time, I'm inclined to agree with Jenkins. To judge by the popularity of games like World of Warcraft and the endless stream of blowing-stuff-up that appears in games reviews in APC Magazine and the conversations of my gaming acquaintances, however, millions of gamers disagree.
Reading through said games reviews, and enduring such conversations, it's easy for scholarly types to dismiss computer games as the most repetitive and unimaginative form of art ever devised. It being cricket season in Australia, however, I was reminded that games like cricket, baseball and various forms of football have been played by more or less the same rules for 150 years or so, and yet people (including me) still find them interesting to both play and watch.
So why shouldn't computer games have the same longevity? If playing a constantly-evolving roster of opponents at cricket and football can keep us entertained for 150 years, why not a constantly-evolving roster of computerised space aliens, fantastical creatures, and terrorists?
Some classic computer games may conceivably have this sort of longevity: I still think fondly of games like Pacman, Tetris and Bubble Bobble long after I lost interest in Doom, Quake and all their clones. Jenkins' and my complaints of repetitive violence might just be symptoms of Theodore Sturgeon's classic observation that "ninety percent of everything is crap" — it's not like every film, book or piece of music released is a masterpiece of inspiration and originality, either.
Game enthusiasts of the 1990s and 2000s often seemed to me to be preoccupied with the quality of sound and graphics, rather like cricketers being preoccupied with the construction of bats and balls. A visit to the International Cricket Hall of Fame (formerly the Bradman Museum of Cricket) earlier this week, however, reminded me that the rules and equipment used in cricket developed for a century or more before what we now recognise as the first Test match in 1877. In a hundred years' time, will we look back on gamers of the 1990s in the same way we look back on those who experimented with the lengths of pitches, the construction of bats, and styles of bowling in the 1800s?
On near-replacements for navigational skills
Over the past couple of months, I've come across a few stories of misadventures with maps. The first involved a man who blamed his GPS for guiding him to the wrong side of the road. The second involved the discovery that an island appearing on several maps of the Coral Sea does not appear to exist. The third involved "Apple Map Disasters" reported in the February 2013 edition of APC Magazine (p. 15). The first two of these stories amazed me for different, but perhaps related, reasons, while the third provides something of an explanation.
The driver involved in the wrong-side-of-the-road episode presumably allowed his technological assistance to override his pre-GPS-navigator skills of reading road signs and following road markings. One or two of the commenters on the story also blame "distraction", which I believe to be a hallmark of poor user interfaces. Either way, technology has frustrated a skill possessed by any competent driver.
An unnamed APC staff member seems to have suffered a similar lapse when Apple Maps' guidance led him to lug his equipment for ten minutes in the wrong direction down a street. On any ordinary Australian street, a simple glance at the street numbers would have told him the correct direction in which to go. Here, indeed, is a pair of cases in which technology has made us stupid by causing its users to overlook their own skills in favour of technology that is not, in fact, adequate to replace them.
The existence or not of obscure islands sounds like a problem out of the seventeenth century, except that we now have Google Maps to blame. The Sydney Morning Herald, which seems to have broken the story, made much of the fact that Google Maps records a "Sandy Island" in the Coral Sea that could not be found by a recent scientific expedition. The story was consequently picked up as "IT news" by The Register and IEEE Spectrum. Shaun Higgins of the Auckland Museum (among others), however, points out that the supposed island pre-dates Google Maps, and, indeed, any computerised mapping system. It seems that Google Maps was simply repeating an error made by cartographers for a hundred years or more, yet news outlets interpreted the whole thing as an "IT glitch". (I should point out that all is not lost: the Sydney Morning Herald itself followed up with Shaun Higgins' explanation, and numerous commenters on The Register offered plausible suggestions on how the error might have come about without Google's intervention.)
APC quotes an explanation of Apple Maps' problems given by Mike Dobson. Apple, he thinks, relied on computerised quality-assurance algorithms without any human oversight to check that the algorithms themselves were correct. News outlets presuming Google Maps to be the source of all cartographic knowledge, I think, risk falling into a similar trap.
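To make Dobson's diagnosis concrete, here is a deliberately toy sketch in Python of how an automated cross-check can "verify" a shared error. Everything in it is invented for illustration (neither Apple nor Google has published its pipeline, and the names and coordinates below are hypothetical); the point is only that a check comparing two datasets descended from the same source can succeed even when both are wrong.

```python
# Toy illustration only: an automated QA check that can detect
# *disagreement* between sources, but nothing else. If both sources
# inherit the same mistake, the check reports success.
# All names and coordinates here are hypothetical.

# Both datasets descend from the same nineteenth-century chart,
# so they agree with each other -- and are both wrong.
REFERENCE_CHART = {
    "Sandy Island": (-19.22, 159.93),  # copied from an old admiralty chart
}

NEW_MAP_DATA = {
    "Sandy Island": (-19.22, 159.93),  # digitised from the same lineage
}

def matches_reference(name, coords, reference, tolerance=0.5):
    """Pass if the entry agrees with the reference within `tolerance` degrees."""
    if name not in reference:
        return False
    ref_lat, ref_lon = reference[name]
    lat, lon = coords
    return abs(lat - ref_lat) <= tolerance and abs(lon - ref_lon) <= tolerance

for name, coords in NEW_MAP_DATA.items():
    status = "verified" if matches_reference(name, coords, REFERENCE_CHART) else "flagged for review"
    print(f"{name}: {status}")
    # Prints "Sandy Island: verified" -- yet the island does not exist.
```

Only an independent source of ground truth (here, the scientific expedition that actually sailed to the spot) can catch an error of this kind, which is presumably why Dobson insists on human oversight of the algorithms themselves.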
Ordinary users, I suppose, could arguably be forgiven for presuming that the products of big-name companies like Google, Apple and in-car navigation manufacturers meet certain standards of quality. Yet we all know that technology makers are fallible, and that even a device that performs one task well might not perform a related one at all. Perhaps "trust, but verify" would be better advice?
