The predictable top ten predictions of 2013
Around this time of year, many commentators on technology (and probably other things) like to offer their "top ten predictions" for the coming year. I've recently skimmed through IEEE Spectrum's 2013 Tech to Watch and the top ten tech predictions of The Conversation's Mark Gregory.
I say "skimmed through" because I'm doubtful that it would be worth my time to examine such predictions in detail. For a start, it isn't clear to me how the commentators define "top ten". Do they mean their ten most confident predictions? Or, since this criterion would result in unhelpful predictions like "computers will represent numbers in binary", maybe they mean their ten most confident predictions of what will change? Or, do they mean the ten most profitable technologies? Or the ten most influential? Or the ten most interesting (which is surely subjective)?
I recently read Duncan Watts' Everything is Obvious, which, among many other things, makes the point that commentators making these sorts of predictions are rarely held to account for what they say. Commentators set out their predictions for the year in December or January, but, so far as I can tell, they're largely forgotten come February. The predictions have no obvious consequences for either their makers or their users, and, indeed, seem to amply satisfy Harry Frankfurt's rigorous definition of "bullshit" as speech made without any concern as to whether it is true or not.
Watts observes that not only is it difficult to predict the fate of current trends, but we don't even know what to predict. Perhaps, in this case, Spectrum's contributors can make some informed guesses about electric vehicles or computer displays that they happen to have heard about, but numerous technologies of the future are likely being developed in currently unheard-of lab experiments or software development houses where neither the IEEE nor anyone else knows to look.
To be fair to Spectrum, I don't think the editors necessarily mean to make any grand statements about what technology will or won't be popular or profitable or influential, but only to draw the reader's attention to some technology that the editors think is interesting. The writers do acknowledge the doubters and pitfalls of technology like Google Glass, for example. I wonder if they and their fellow prognosticators ought to dispense with the "top ten" and the "predictions", and use a more modest "ten interesting things"? After all, that's effectively what the editors do when they put together an ordinary issue of Spectrum or The Conversation.
The University of Western Sydney set to deploy black boxes
The University of Western Sydney ("UWS") recently announced that it would give all new students an iPad. Numerous commentators on The Conversation and elsewhere have — probably rightly, in my view — panned the initiative as an example of marketing over substance.
UWS' own information on the initiative provides a vague assurance that "the iPad initiative will assist academic staff in the delivery of cutting edge learning and teaching." The concrete examples that follow are limited to online lectures and library services, which have been available for a decade or more at universities around the world, and work fine with devices that existed long before the iPad.
The Conversation quotes one Phillip Dawson observing that the iPad may help bridge the "digital divide" (though he thinks it is an expensive option). I can certainly see a lot of sense in providing facilities that ensure that students, no matter what their background, are able to participate in their courses and complete the work those courses require. UWS, however, seems to have fallen victim to the black box fallacy in thinking that iPads are the solution for all courses. Given that much university work involves writing essays, doing mathematics and (in the courses that I teach) writing computer programs, what are students expected to do with a device without a keyboard?
Dawson goes on to observe that students can expect "this sort of technology will be an integral part of the learning experience at UWS", which seems consistent with UWS' own announcements as well as the comments of Simon Pyke on a similar initiative at the University of Adelaide. If so, I pity the academics at UWS (and the University of Adelaide) who I suppose are being asked to teach to the technology instead of being offered the technology that best supports their teaching. I fear to write what I would think if someone told me that I had to teach programming using an iPad, which I understand to have no keyboard, no compiler, and no ability to run programs until they have been approved by Apple.
I'm pretty sure that Apple will be the biggest winner out of UWS' purchase. Apple will sell thousands of devices, and add UWS' imprimatur to its educational credentials. Maybe the students will get a piece of equipment with some value as a content delivery and communications tool, but to what are they going to turn when they want to practise the critical thinking, scientific, artistic and communication skills that they actually came to university to develop?
Checking facts and faking expertise
Jason Lodge recently asked on The Conversation: is technology making us stupid?
Of course this depends somewhat on what one considers to be "stupid". As Sue Ieraci's comment observes, "every generation appears to value its own ways of knowing and relating above those of the generations above and below." Lodge's article starts with whether or not rote learning has been displaced by ready access to sources of information such as Google. If so, we might be becoming "stupid" insofar as intelligence is measured by an ability to remember facts.
I, and probably Jason Lodge also, would be surprised if anyone still considered rote learning to be the pinnacle of "intelligence". Well before the World Wide Web even existed, there was far more information in the world than any one person could be expected to remember, and how many teachers these days would consider their students to be "intelligent" merely for copying something into an essay or computer program? Modern educators therefore prize skills like knowing how to find information, determining whether or not it is reliable, and synthesising it into a coherent response to a question.
I think that being able to recall a certain breadth of factual information is nonetheless useful: imagine that you had to resort to a dictionary to look up the spelling and meaning of every noun you came across! And imagine what a teacher I would be if I had to look up the textbook every time a student asked a question!
I suppose that knowing what needs to be remembered, and what can be left for looking up, is a skill of its own. A Java programmer who can remember the difference between "int", "double" and "String" is surely going to be far more productive than one who can't, for example, but the same programmer can safely leave it to the documentation to explain how to parse hexadecimal numbers with the java.util.Scanner class (a short sketch at the end of this entry illustrates the point).

When advising my research students about presentations, I often tell them that they ought to be able to talk knowledgeably about their subject without having to look everything up as they go. The title of the article aside, I guess Lodge is really asking whether or not technology has made us complacent about what constitutes "knowledgeable". Has ready access to search engines and the like, he asks, made us imagine we are experts in subjects that we can't actually talk about except insofar as we can look them up?
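To make the Scanner example above concrete, here is a minimal sketch (the class name and sample input are my own, hypothetical choices). The detail worth remembering is that Scanner exists and reads tokens; the radix argument to nextInt is exactly the sort of thing that can be left to the documentation:

    import java.util.Scanner;

    public class HexLookupDemo {
        public static void main(String[] args) {
            // Worth remembering: Scanner reads tokens from a source.
            // Worth looking up: nextInt accepts a radix for hexadecimal input.
            Scanner in = new Scanner("ff 1a2b");
            int first = in.nextInt(16);   // 255
            int second = in.nextInt(16);  // 6699
            System.out.println(first + " and " + second);
            in.close();
        }
    }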
Extreme prophecy and unsubtle predictions
I've recently read a couple of smug comments from technology enthusiasts lambasting what they perceive as Luddism from sceptics of some recent technological adventure. One comment on The Conversation equated doubters of massive on-line open courses with a newspaper executive insisting that people would never want to read classifieds anywhere other than a printed newspaper. And, having worked in copyright protection for many years, I'm well-acquainted with the hacker triumphalism that follows the breaking of some rights management scheme.
The technology enthusiasts involved are, of course, cherry-picking the failed predictions of their opponents (or, in the case of the comment cited above, describing a caricature that probably doesn't represent the opinion held by any actual person). The Register, for example, recently described ten technology fails illustrating that technology enthusiasts can be just as mistaken in their views of the future as anyone else.
I guess I'm predisposed to doubt apocalyptic predictions like the notions that universities will be replaced by massive on-line open courses or that iPhones have already brought about a revolution. Aside from the fact that I'm yet to experience any such apocalypse despite the numerous technological changes that have occurred over the course of my life, extreme predictions of this sort are inevitably simplistic.
For one, the world is vastly bigger than any single technology or product, and the changes brought about by any one are always going to be tempered by numerous other influences. So the iPhone is a very successful mobile computing product: but what did it do for refrigeration, power generation or surgery?
For another, existing institutions don't just sit back and wait for their demise when a new technology comes along, even if technology enthusiasts would rather that they did. The music and newspaper industries, for example, may have struggled to re-organise their businesses around electronic media, but they never just packed up and walked away, and they continue to try things even now. And far from planning either to shut down their universities in the face of massive on-line open courses or to pretend that such things don't exist, vice-chancellors Ed Byrne and Margaret Gardner offer some more measured thoughts about how existing universities might work with on-line courses.
Publishing executives, vice-chancellors and others in their position probably won't be correct in every detail — but could they be as wrong as a prediction that technology X will overwhelm everything?
Re-inventing the wheel, I mean, human
I read David F. Dufty's Lost in Transit: The Strange Story of the Philip K Dick Android over the weekend, whose subject matter is plainly described by its sub-title. I found the book informative and entertaining in its own right, but reading about androids also reminded me of a rhetorical question I asked in the first entry in this blog: what is the purpose of creating human-like artificial intelligence when we have seven thousand million human intelligences already?
I, and most other academics, could probably produce a long list of intellectual reasons for such a pursuit, ranging from better understanding of how humans interact with other animate objects to illuminating the concept of "intelligence". But some of the folks in Dufty's book (and elsewhere) clearly think that human-like artificial intelligences have more immediate practical uses.
David Hanson, the sculptor who championed the project and built the android's head, argues that "when we interact with things in our environment we interact more naturally, and form more natural relationships, with things that look like us" (p. 73). The surrounding text suggests that Hanson was also thinking about the more academic reasons outlined above, but I think the assumption in this quote deserves some scrutiny.
An essay that I read some years ago, and whose citation I now forget, disputed this kind of thinking using the example of cars. Nearly everyone can learn to drive a car, and we think nothing of it once we've got our licence. This is not, the essay points out, because cars look or behave anything like people, but because they have an interface suited to the task of controlling a motorised vehicle. Why expect that computers (or robots) should be any different?
Hanson might be correct in surmising that we interact more naturally with devices that look like us, at least in the sense that such interaction requires no skills beyond those we acquire informally at a very early age — though I suspect that many people (Sherry Turkle, for one) would consider the concept of a natural relationship with an artificial being to be an oxymoron. But that's not to say that a hammer or refrigerator, for example, would necessarily be easier to use if it looked like a person.
I think Hanson means to imply that human-like interfaces are worth pursuing because they seem likely to be the most appropriate ones in at least some situations. I'm not yet convinced that human-like interfaces are the best way of interacting with anything other than humans, but maybe that's because I don't have any particular uses for androids.
