Digital media and the choice of walls
I felt that I ought to have something supportive to say when I read Andrea Carson's article on paywalls for The Conversation a couple of weeks ago. I've tended to think of the word "paywall" as a kind of swear word used by free-content advocates to disparage those with the gumption to charge for their work, but authors like Carson seem to have taken it on board as the standard technical term for enforcing a paid subscription. For Carson, a paywall is a legitimate business model by which quality journalism might be funded. (I still prefer to just say "subscription" myself, though.)
Free-content and open-source advocates, on the other hand, insist that the world can and should provide quality content for free. There are, indeed, cases in which such content is made available for free, for various reasons. Michael Brown's comment on Carson's article, however, points to one elephant in the room: significant amounts of free content are funded by the public. Such content is not ultimately free, but paid for out of tax revenue.
Carson mentions what seems to be another elephant in the room inhabited by free-content advocates, but I didn't think much about it until I read David Waller's more recent article on advertising. Waller reviews a recent book by Joseph Jaffe and Maarten Albarda claiming that organisations can reduce their advertising budget to zero by using charismatic mouthpieces, customer relationship management and social media. The last one bemused me since social media is itself typically supported by advertising, and I find it hard to get excited about ad-supported advertising. Yet advertising is the main, and maybe only, alternative for private content providers looking to meet their costs without the dreaded paywall. Would open content sound so noble if we re-badged it, truthfully, as "ad-enabled content" or its mechanism as an "adwall"?
In one section of The Happy Economist (2010), Ross Gittins points out that it is tremendously arrogant to insist that one's own interests or activities transcend economics. It sounds noble enough to say that one's work cannot be reduced to monetary terms, but economists know that the real question is not about money but about the distribution and use of resources. (Though many of economists' biggest fans do not seem to be very good at explaining this.) Getting back to paying for content, the real question is not whether or not it should be free, but what is the most effective way of resourcing it? Do we resource it by public funding, by subscriptions, by advertising, by charity, or by something else? Or, do we not resource it at all and allow it to wither because we'd actually rather use our resources on something else?
Shadowy citizens vs shadowy governments
I've been reading quite a bit about Bitcoin and other anonymisation technologies over the past week or so, partly driven by the recent shutdown of an anonymous marketplace known as Silk Road. David Glance has a bit to say about Bitcoin, Silk Road and Liberty Reserve on The Conversation, while Jonathon Levin discusses possible directions for Bitcoin and Nigel Phair ponders likely replacements for Silk Road in the same venue. G. Pascal Zachary comes at similar issues from the point of view of surveillance in the October 2013 issue of IEEE Spectrum (p. 8).
Levin opens with a statement about Bitcoin enthusiasts and libertarians being confused by the slow take-up of what, to them, is a tremendous advance in anonymity and freedom from Big Bad Government. I don't know which, if any, specific libertarians are being referred to by Levin, but Levin's statement certainly seems consistent with traditional cyberlibertarian thinking that anonymity and secrecy are the path to the protection of rights and freedom.
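It's worth noting that "anonymity" is a generous word for what Bitcoin actually provides. Addresses are pseudonyms, but every transaction between them is published on a public ledger, so anyone can trace the flow of funds. The toy sketch below illustrates the point; it is a simplification, not real Bitcoin code, and all the names, keys and amounts are invented (real Bitcoin addresses use SHA-256 followed by RIPEMD-160 plus a checksum, and real tracing contends with far messier transaction graphs).

```python
import hashlib

def address(pubkey: str) -> str:
    """Derive a pseudonymous address from a public key (toy version)."""
    return hashlib.sha256(pubkey.encode()).hexdigest()[:16]

# Invented participants: the ledger never shows their names,
# only their addresses.
alice = address("alice-public-key")
bob = address("bob-public-key")
carol = address("carol-public-key")

# The ledger is public: (from_address, to_address, amount).
ledger = [
    (alice, bob, 5.0),
    (bob, carol, 3.0),
]

def trace(start: str, ledger) -> set:
    """Follow funds forward from an address -- exactly the kind of
    graph analysis anyone, law enforcement included, can perform."""
    reached, frontier = set(), {start}
    while frontier:
        addr = frontier.pop()
        reached.add(addr)
        for src, dst, _ in ledger:
            if src == addr and dst not in reached:
                frontier.add(dst)
    return reached

# Alice's funds can be followed through Bob all the way to Carol.
print(len(trace(alice, ledger)))  # → 3
```

The privacy on offer is thus only as strong as the separation between a person and their pseudonym; once any one transaction is tied to a real identity, the whole trail unravels.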
Non-libertarians, of course, probably think more like Nigel Phair and G. Pascal Zachary, who accept that there are certain behaviours deemed to be illegal for good reason, and that law enforcement agencies must therefore have some sort of power to detect and arrest those who engage in those behaviours. Assuming that the non-libertarians aren't doing any of these illegal things themselves, they perceive somewhat less need for anonymity. For that matter, even libertarians agree that the state should enforce property rights and contracts, and one wonders if even they would be pleased with a technology that allowed anonymous miscreants to steal property and dishonour contracts.
Anti-surveillance commentators love to mock the surveillers' defence that "you've got nothing to worry about if you're not doing anything wrong", but the surveillers may be perfectly correct if they're referring to what the surveillers consider wrong. Why waste time persecuting behaviour with which one has no problem, after all? The problem is, not everyone agrees with the surveillers' vision of wrongness, and anti-surveillers fear persecution for behaviours that they consider acceptable, but which the surveillers consider wrong.
The dealing of drugs, identities and violence alleged to be taking place on Silk Road and its like probably doesn't do much for the anti-surveillers' case. Apparently Silk Road users really do have something to hide under the law of most countries, and I doubt many people are shedding a tear for those poor old criminal gangs who've just lost one of their meeting places.
Hal Berghel's take on PRISM in the July 2013 issue of IEEE Computer asks that politicians do not take the "trust me" approach to defending government surveillance apparatus, in which politicians ask us to trust that said apparatus is only being used to apprehend genuine criminals. Simply hearing "trust me" is certainly dissatisfying. Said politicians need to prove their trustworthiness by demonstrating that, if you're not doing anything wrong, you really do have nothing to fear. But anti-surveillers have a similar problem: why accept a statement of "trust us" from a shadowy on-line marketplace any more than a statement of "trust us" from a shadowy government department?
What's a STEM crisis?
I've recently been reading a bit about a possible "STEM crisis", or lack of one, mostly in IEEE Spectrum, but also on The Conversation. "STEM" is an acronym for "Science, Technology, Engineering and Mathematics", and the crisis, if it exists, is supposed to be caused by a shortage of graduates in STEM disciplines.
The disputants seem to me to be asking two somewhat different questions. STEM enthusiasts like professional societies and chief scientists start with the assumption that STEM is a good thing that we should be doing more of, and argue that we should therefore have more STEM graduates to do it. Economists and out-of-work STEM graduates start with the observation that there are already numerous un- and under-employed STEM graduates, and argue that we should therefore have fewer of them.
These two views are perfectly consistent if one accepts that we, as a society, ought to be doing more STEM. If so, the enthusiasts are really saying that there is a crisis in the amount of STEM being undertaken. STEM graduates experience this crisis as an inability to find work.
How much STEM should we be doing? In the Conversation article cited above, Andrew Norton assumes that we should be doing exactly that STEM for which buyers are willing to pay (manifested in Norton's article by how many STEM graduates employers are willing to hire). Taken at face value, this is more or less the answer one gets from basic free market economics: if doing some STEM gets the greatest value out of all the ways buyers could use the resources involved, the buyers will pay for that STEM. If doing something else with those resources gives the buyers a greater benefit, the buyers will do something else.
I think that most people, however, would agree that a significant proportion of STEM has the character of what economists call a "public good". Public goods are items like defence forces and street lighting for which it is difficult or impossible to charge people according to their individual use. Markets may under-invest in public goods since would-be investors can't extract payment for them even though buyers exist who would actually use them.
Norton implicitly assumes that the government has estimated the value of public STEM and invested a suitable amount of tax money into it, creating a matching demand for STEM graduates in government-funded programmes. I suspect that the enthusiasts, however, place more or less infinite value on STEM. For them, there will always be a "STEM crisis" because no amount of government or industry investment can ever realise such a value.
The rise/fall/whatever-you-call-it of civilisation
I read a couple of articles this week that, without being specifically directed at technological optimists, seemed at odds with the technology-is-advancing-faster-than-ever-before narrative that I've become accustomed to in publications like IEEE Spectrum. The Australian (18 September 2013, p. 29) had Peter Murphy contending that "big ideas in art and science seem like a thing of the past", while Radio National had Ed Finn lamenting that current science fiction typically portrays a pretty grim future.
Peter Murphy sounds like the kind of person who contributes to a narrative that I once saw described (I forget where) as "civilisation has been declining since it started". For him, the good old days were left behind somewhere in the middle of the twentieth century, and we no longer have anything interesting to say. Ed Finn is not such a curmudgeon himself, but draws attention to the trend from the largely utopian science fiction of the mid-twentieth century to the dystopian sort now enjoying popularity. Finn proposes to encourage more inspirational science fiction through a programme known as Project Hieroglyph. I presume that neither of them has been reading IEEE publications, Ray Kurzweil, Kevin Kelly, or any of their ilk, for whom things are (mostly) quite the opposite.
I was struck by the degree to which Murphy's article used the same technique as that used by more euphoric views of technology, however much their conclusions might differ: make a set of assertions about the importance of certain artworks or technologies that are at best subjective and at worst arbitrary, then conclude with whether you liked the older ones or newer ones better. Whether things are getting better or worse thus seems to depend largely on whether you prefer Daniel Defoe or Stephen King, or whether you happen to see more ploughs or iPhones.
The most convincing analysis of this sort that I've encountered is the one in Jaron Lanier's You Are Not a Gadget (2010). Writing about music, he argues that no new genres of music have appeared since hip hop in the 1980s, and that no one could tell whether a pop song that came out in the past twenty years was released in the 1990s or the 2000s. In another section, he argues that open source software consists largely of clones of prior commercial software. I'm sure there are plenty of musicians and open source software developers who might argue otherwise, but Lanier's points are at least testable hypotheses.
Given that the importance of any particular technology or piece of art is so subjective, I'm not sure it's really very meaningful at all to make sweeping statements about whether art or technology is getting better or worse, or faster or slower. The Lord of the Rings, for example, has been immensely influential for generations of fantasy writers and readers, but I don't imagine it means much to writers and readers of, say, romantic comedies. There might nonetheless be more specific statements that could be made, but they need to be much more robust than merely a list of what one individual likes and doesn't like.
Social experience machines
While expanding my thoughts on synthetic worlds for The Social Interface, I made a connection between Edward Castronova's concept of migration to synthetic worlds, and Robert Nozick's experience machine. Nozick postulates a machine able to give its user any experience he or she desires, but argues that no one would actually want to live in such a machine. Therefore, he argues, people do not subscribe to the utilitarian notion that we care only about the pain and pleasure we experience.
It's important for Nozick's argument that potential users of the experience machine are aware that it simulates experiences, since he argues that potential users would find this simulation dissatisfying irrespective of how good the experiences were. Castronova's synthetic worlds satisfy this criterion since their users are aware of entering and leaving their worlds, and this would be the case even if virtual reality technology advanced to the point that it could provide perfectly realistic experiences.
Assuming that Nozick is correct that a fully-informed person would not want to live in an experience machine, the question remains as to what might happen were someone to enter an experience machine without knowing it. Fully-functioning experience machines don't exist, but I think an argument can be made that certain aspects of them do. Would a person tricked into entering one feel cheated?
During the discussion that led to my dangerous idea last week, one of my colleagues observed that it felt rewarding to accept connection requests, and rude to decline them. I countered that this was exactly why I'd deleted my LinkedIn profile: it seemed superficially rewarding to accept connection requests, and at first I thought they might lead to something, but this quickly turned to disappointment when I realised that I wasn't actually connected to these people in any meaningful way, and it never led to anything.
For me, LinkedIn was a primitive experience machine that (momentarily) provided the experience of being connected. As Nozick predicted, I got myself out of it once I'd decided that the experience was, in fact, simulated. As Sherry Turkle puts it, it promised friendship but delivered only a performance — and a particularly crude one at that.
I suppose that people who use LinkedIn and other networks might contend that that was my particular experience, that they have built genuine connections with it, and that maybe I wasn't using the tool correctly in order to benefit from it. Or maybe it's just not my thing, in the same way that stamp-collecting and dog ownership aren't my thing.
This all sounds plausible enough, and I can neither prove nor disprove it. When pressed, I guess I find the "not my thing" explanation most convincing. Going back to experience machines, though, I only felt cheated once I'd compared the LinkedIn experience with my physical world experience. If I were still in LinkedIn's experience machine, and ignorant of the physical world, might I not be as happy as everyone else in that machine?
