I Don't Want To Be A Nerd!

The blog of Nicholas Paul Sheppard

Where is the Thomas Piketty of art?

2015-03-21 by Nick S., tagged as commerce, digital media

The discussion between Mike Swinbourne and Peter Wilkin in response to my recent Conversation article on business models for the creative industry reminded me that the excessive profits alleged to be made by powerful industry players are the go-to argument for many critics of copyright. Yet the creative industries are hardly the only ones in which a small number of powerful figures can obtain much greater wealth than the average member of society, so are excessive profits in the creative industries really a result of copyright at all, or are they just the usual dynamic of the economic system that we have?

I decided to do some reading on this question, but struggled to find any actual study of this issue or a related one. Every now and again someone mentions the distribution of revenue obtained from the sale of a CD (G. Prem Premkumar's was the best I found) but no one seems to have compiled similar figures for on-line purchases, films or books. Nor does anyone appear to have investigated the overall distribution of revenue amongst artists, publishers, distributors and the rest, which is closer to what I wanted to know. Either my literature search skills have failed me, or I have a project for an economics student looking to become the Thomas Piketty of art.

Whether or not their profits are fair overall, large corporations are intimately involved in the creation and distribution of the art most discussed in debates over copyright and infringement. For all the vitriol directed at major record labels and movie studios, I can only recall one person (in a comment on The Conversation) professing that he'd rather do without their products. For all other critics of the creative industries, there's no question that Game of Thrones, Mad Max IV, The Lego Movie and the rest ought to be available, and it's hard to see how anyone but a large corporation or exceptionally wealthy individual could put together something that requires the many millions of dollars that go into creating such blockbusters.

Shana Ponelis and Johannes Britz, discussing the ethics of copyright infringement, point out that infringers can ease their consciences with the perception that the victims are giant corporations who no one professes to like and who can well afford the loss of one measly song or film. Any losses to the artists employed or contracted by these corporations are quietly ignored (as are the copyright users who do pay, covering the cost of creating the work that infringers take for nothing). But, in doing so, are infringement apologists forgetting who bought them the goodies at the centre of the debate?

I can't say I find giant corporations particularly lovable, either, and I try to favour small businesses in my neighbourhood as much as possible for things like groceries, services and entertainment. But I have to concede that giant corporations have their uses — for a start, how many of our digital entertainment devices were lovingly crafted in the rustic workshop of an independent electronics artisan? Until someone gets around to examining the distribution of wealth in the creative industries, however, it's hard to say how they might be best used and rewarded.

Bringing your own device, or providing it for someone else?

2015-03-05 by Nick S., tagged as employment, mobile computing

When I touched upon bring-your-own-device schemes in an article about upgrading devices last month, my inner industrial relations consultant was a little troubled by the whole idea of bring-your-own-device: why would I provide equipment for my employer's use at my own expense? I then read Brian M. Gaff's column BYOD? OMG! in the February 2015 issue of IEEE Computer (pp. 10-11), in which he provides some advice for employers in managing devices brought into the workplace by employees.

In doing so, Gaff sometimes makes said employees sound very much indeed like suckers providing free equipment to their employers and donating time outside of work hours: BYOD transfers costs from employers to employees, and increases productivity (per dollar, if not per time) by allowing employees to work at home. Reading that "personally owned devices are typically more advanced compared to those that are employer issued" (p. 10), I further envisaged a workplace version of John Kenneth Galbraith's "private opulence and public squalor" in which BYOD participants flaunt their cool new devices while the workplace's own infrastructure is left to rot.

To be fair, there are benefits in it for employees as well. They get to use devices set up to their own specifications, and there's some convenience in not having to switch between personal devices and work ones. I myself frequently answer work e-mail from my home computer (though that says as much about the casual nature of my employment as anything to do with which computer I like to use). Maybe one could even make an environmental case for the practice insofar as it reduces the number of devices that need to be built (though the perpetual upgrade cycle that feeds BYOD enthusiasm may have exactly the opposite effect).

Apparently pretty much everyone thinks this is all more than fair, because a quick search for "bring your own device" on both Google and Bing fails to bring up anyone complaining about employers transferring costs to employees. Indeed, if Gaff, Google and Bing are to be believed, employers can barely stop employees from bringing their beloved devices to work.

Still, it's not clear to me whether BYOD enthusiasts have consciously rejected any concern over who pays for work to be done, or if they have in fact forgotten to ask the question in their rush to use a favourite device. Even I wouldn't reject BYOD outright over the concern I've noted above — but I would want to be sure that I'm not just providing technology procurement services as a free add-on to my normal duties.

Who will stand up for advertising?

2015-03-01 by Nick S., tagged as commerce

The Conversation published my quick overview of business models for the creative industries in the latter weeks of February. I shortly had to laugh at Peter Wilkin's sarcastic suggestion that ads be embedded into songs in response to a copyright-is-futile comment from Mike Swinbourne. Wilkin's tongue-in-cheek hope is that we'll get so sick of the results that we'll go back to paying for music.

I've previously written about the way that the need for advertising money drives data collection machinery like that of Google and Facebook. Wilkin's suggestion at least has the virtue that music listeners would be forced to confront the results of their unwillingness to pay.

Advertising has historically been very successful in raising funds for free-to-air radio and television, print news and search engines, but I'm hard-pressed to think of anyone praising it for its support of great art. At best, people accept it as a necessary evil to be endured in return for regular entertainment. (I presume that marketers themselves have more positive opinions but I don't read the publications in which they offer their views.)

Perhaps those who suggest advertising as the solution for raising funds ought to make the case for its greatness, if this is what they really believe. The apparent lack of any such case leaves me wondering if they really appreciate advertising's contribution, or if they're just being seduced by apparently free content.

Having had a life-long hatred of advertising, I decided some time ago to make a point of taking the paid version rather than the ad-supported version whenever the former was available. So I pay for someone to host this web site and my e-mail accounts, I pay subscriptions to web sites that I read regularly, and I pay for shareware that I find useful.

This policy has its limits, particularly when it comes to sites or applications that I use only casually. Without an efficient infrastructure and user interface for making small electronic payments, it's difficult for creators to offer paid-for access to small items at a reasonable cost. Perhaps here I'd concede the value of advertising, if only because twenty-odd years of work on "micro-payments" has amounted to more or less nothing.

I wonder if those who support advertising as a solution more generally would be prepared to acknowledge a similar policy. Would anyone commit to a policy of not fast-forwarding through ads on video, not installing pop-up blockers, and not subverting data collection machinery by entering fake data? And is anyone (who is not themselves a marketer) prepared to write an ode to the wonders that advertising has brought us?

Do super-intelligent machines have a purpose and is it a good one?

2015-02-26 by Nick S., tagged as artificial intelligence

Over the past month, I happened to read a few books in which machine intelligence plays a big part, being Nicholas Agar's Humanity's End (2010), Frank Pasquale's The Black Box Society (2015) and Tyler Cowen's Average is Over (2013).

Cowen is by far the most sanguine, if only because he takes a firmly amoral view that only an economist could love. He presents as inevitable a future of super-intelligent calculating machines tended to by a few elite humans able to work with them, while the remaining workforce finds itself of little value. Agar, on the other hand, doubts that augmenting humans beyond their natural abilities has any real benefits, and Pasquale fears that the secret algorithms behind search engines, computer trading and the like will stymie the public's understanding and control of the information that is presented to them.

While there are many small points on which I find Cowen's logic impenetrable, I did appreciate his characterisation of super-intelligent machines. Rather than have a human-like intelligence appear fully-formed at some choice moment as it does in so much science fiction, he sees machine intelligence emerging gradually and appearing alien and unintelligible to human intelligence. If it takes eighteen years for a human to become fully developed in the legal sense, why expect that a machine — especially the first one ever built, presumably the most primitive of its kind — could achieve the same immediately upon being switched on? And why expect a computer to behave like a human when it is an entirely different sort of construction?

Agar asks what interest we could have in anything super-intelligent machines do if their behaviour is incomprehensible to us. Cowen observes that few people are interested in watching computers play chess against each other, precisely because human watchers don't understand what the computer players are doing. Yet, if machine intelligence emerges gradually, at what point might we decide to stop because we're no longer interested?

Pasquale suggests a more sinister possibility. How do we know that secret or incomprehensible behaviour is in our best interests? I'm sure plenty of people would regard Cowen's world as dystopian without any further elaboration, and it's easy to think up even worse dystopias in which the elite (Google et al. in Pasquale's book) enrich themselves while keeping everyone else ignorant of the real state of affairs, or in which machines become trapped in an echo chamber processing only data created or influenced by themselves.

Cowen seems to be confident that his super-intelligent machines will be able to get good results even if we don't understand why, citing examples like the ability to win chess games and match successful romantic partners without any human being able to understand how they made their decisions. For problems with narrow and well-defined goals — like winning games and, at least to a crude approximation, marriage — it's easy to verify that a solution is correct even if we don't know how the solution was arrived at. But computers are already superb at narrow and well-defined goals, and no one would suggest that we allow them to rule over us on that basis, because such goals are only crude approximations of what we actually want.

Pasquale's solution is to expose the algorithms to scrutiny. Perhaps no human could follow the detailed execution of an algorithm because a human cannot keep track of so many variables as quickly as a computer can. But we must understand the algorithms on some level in order to build them in the first place, and to judge whether or not they are good algorithms. And if we can't judge whether or not the algorithms are good, what is our purpose in creating them?

Should universities lead or follow technological trends?

2015-02-20 by Nick S., tagged as education

The Australian's Higher Education section this week either presented some very strange research, or made a very strange presentation of some research, in claiming that Twitter is the least used online resource (18 February 2015, p. 30). (Less used than www.nps.id.au, are you sure? I laughed.) The article doesn't clearly identify the study alleged to have discovered this and I wasn't able to find it via a search engine, so I can only go by the article's presentation here.

As the article has it, Twitter is "the social media platform of choice for academics, journalists and a host of other professionals" but "barely rates as an educational tool". This is based on a survey showing that only 15% of participating students found Twitter useful in their university studies.

To my mind, the most obvious explanation for this is that Twitter just doesn't meet the needs of university education. As far as I know, it was never designed for this purpose, so it's hardly surprising that people don't use it as such. Refrigerators, say, probably get even less use in university courses, and no one would expect anything else given that refrigerators were never designed for educating people.

The article instead quotes the study's lead author, Neil Selwyn, speculating that the finding "could be seen as a negative for universities [since Twitter] is where the technological generations are having conversations and finding stuff out." The underlying assumption seems to be that the Cool Kids are using Twitter, and universities might not be cool if they don't use it too.

Well, students probably use refrigerators quite a bit too, but does that mean that it would be useful to have one in my classroom? If Twitter is to be accepted as an educational tool, educators need to be convinced of some educational purpose in using it. Those who do things in order to be cool are more likely to be described as "try-hards" than "innovators".

And are the Cool Kids really using Twitter anyway? According to the article, nearly all students are actually using learning management systems, on-line libraries and on-line videos — and why wouldn't they, given that all these tools have well-established educational uses? The article itself acknowledges that the students are all aware of Twitter, they just don't use it for this particular purpose. Maybe the article could just as meaningfully have read "Twitter barely rates as an educational tool, yet is the social media platform of choice for academics, journalists and a host of other professionals."