I Don't Want To Be A Nerd!

The blog of Nicholas Paul Sheppard

Software interpretation of human communication and its discontents

2012-10-23 by Nick S., tagged as communication, user interfaces

I found myself using Google Mail today, having joined a company that uses it as its e-mail system. Aside from my typical frustration with the bloated, slow and browser-dependent interfaces sported by modern webmail programs, I was specifically annoyed that Google Mail, by default, hides part of the e-mail that I'm working on: namely, the signatures and quoted material. As a result, I found myself reading and sending e-mail messages without being sure how they appeared on the receiver's screen.

By coincidence, I also received an e-mail today from a friend apologising for the poor formatting of her previous e-mail. Apparently it looked fine in her e-mail client, but it was garbled by other people's clients (including mine). I recently received another e-mail with a signature containing two different e-mail addresses for the sender, one obviously wrong.

My experience with Google Mail illustrates why this might have happened: the e-mail clients of the victims in the above stories presented a view of each e-mail that they deemed helpful, while other people's e-mail clients presented the views that those clients deemed helpful.

The views disagreed. Instead of presenting the "true" content of the e-mail, the e-mail clients involved presented their own interpretation of it. (By "true", I mean the universally-agreed encoding of e-mail messages: plain text or, at least, HTML.) And, in doing so, the e-mail clients foiled human communication.
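To make the point concrete, the sketch below (a made-up message and addresses, not taken from any of the e-mails above) uses Python's standard email module to print the underlying plain-text content of a reply, selectively quoted line and signature included; everything a client does beyond showing that text is interpretation:

    # A made-up, old-fashioned reply: one selectively quoted line and a
    # short signature, all in plain text.
    from email import message_from_string
    from email.policy import default

    raw = (
        "From: alice@example.org\n"
        "To: bob@example.org\n"
        "Subject: Re: meeting\n"
        "Content-Type: text/plain; charset=\"utf-8\"\n"
        "\n"
        "Sounds good, see you at ten.\n"
        "\n"
        "> Shall we meet on Tuesday?\n"
        "\n"
        "-- \n"
        "Alice\n"
    )

    msg = message_from_string(raw, policy=default)
    # get_content() returns exactly what the sender wrote; whether the
    # quoted line and the signature are displayed or hidden is a choice
    # made by the receiving client, not a property of the message itself.
    print(msg.get_content())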

Regarding my particular complaint, I suppose that Google Mail's developers think they are being helpful by automatically eliminating "extraneous" information like quoted messages and signatures. As I said in my previous post, I'm all for eliminating useless distractions from user interfaces. But if signatures and quotes really are useless distractions, why include them in an e-mail in the first place?

I'm part of what seems to be a dwindling minority of people who adhere to the custom of selectively quoting the e-mails to which we reply. Once upon a time, Internet users would have been appalled at the wholesale quoting of earlier e-mails that seems to be current practice. If you're not going to use it, we thought, delete it and save the space.

Now, Internet bandwidth isn't as precious as it used to be, and one might say that only hoary old nerds would cling to byte-pinching practices developed in the days of 2400 baud modems. But I nonetheless think that selective quoting serves a more human purpose: it foregrounds what is important to the communication, and eliminates what is not.

Interventionist features like Google Mail's seem to me at once to encourage lazy communicators to fill their e-mails with junk in the expectation that their software will correct it for them, and to frustrate careful communicators by making them fight their software in order to send the message that they want to send. The winner is poor communication.

Who's for clutter and distraction?

2012-10-13 by Nick S., tagged as user interfaces

A few weeks ago, The Register featured an article entitled Information is the UI in Windows 8, says design guru. I read the article three or four times and still have no idea what the design guru of the title (Shane Morris) was on about, but the comments on the article reminded me of one or two battles I've fought and lost over user interfaces. I was reminded of this again while trying to disable a particularly annoying feature of LibreOffice the other night.

I'm a dyed-in-the-wool minimalist. I detest desktops full of icons; I fight constant battles against Windows programs that want to add themselves to prominent places on the "Start" menu, task bar and desktop; and I despise Gnome and KDE for emulating Windows. (For the record, I prefer Fluxbox.)

There is obviously a certain amount of personal preference here, but the comments on The Register's article make it clear that I'm not the only one. A few commenters mention the infamous <blink> HTML tag. When it was first developed, it presumably seemed like a good way of emphasising text. However, people rapidly discovered that blinking text distracted readers from the rest of the web page.

Google's home page is, I think, a legendary piece of web design. It replaced the complicated and distracting interfaces of AltaVista, HotBot, et al., with an interface that gets right to the point of what people want from a search engine: a "search" box. Yet web designers -- apparently unaware of Google's success -- continue to stuff their sites full of links, images, Flash, JavaScript and all the rest.

I don't think I've ever heard anyone say that the way to make a good user interface is to add as many buttons, sidebars, animations and other doodads as possible. So why do well-known web sites like Bigpond and NineMSN seem to do exactly this?

The obvious answer is "feature-itis", the process by which software gains features that seem good in themselves but whose over-abundance as a whole detracts from the comprehensibility and usability of the software. It's easy to say that some software or a web site should provide such-and-such a feature, and usually easy enough for a software developer to make the feature happen. But the proponent of a feature is unlikely to admit that it isn't that important and can be relegated to a second-level menu.

One of the pieces of information that I used to resolve my problem with LibreOffice illustrates this. A user of similar mind to me complained that the section title tooltip displayed by LibreOffice when scrolling through a document is "distracting and annoying". In response, Roman Eisele asserted that "some people will find this tooltip useful". So, on the strength of some unidentified people who might conceivably like this feature, Roman suggested resolving the original user's complaint with a feature that allows anti-tooltip users to disable the scrolling tooltip without disabling all tooltips (which is what I had to do in order to disable a feature that, for me, makes LibreOffice virtually unusable).

I suppose there must be people who like interfaces that I find cluttered: plenty of computers have them. According to the November 2012 issue of APC Magazine (p. 57), Microsoft decided to remove the "Start" menu from Windows 8 in part because users "were pinning their favourite apps to the taskbar instead," which I consider to make for a very cluttered taskbar. By way of pleasing these users, the Interface Formerly Known As Metro seems to pile icons onto the screen in exactly the way that I hate most. Mind you, most of APC's writers seem to have grave doubts about it, too. I wonder if Microsoft has also checked that those users weren't just pinning icons to the taskbar because the "Start" menu itself is a bastion of uncontrollable clutter?

Tablets and the elusive black box of computing

2012-09-30 by Nick S., tagged as buzzwords, mobile computing, prediction

The Conversation last week included Roland Sussex wondering if digital tablets have become essential. His answer seems to be "no", since he observes they aren't very good for textual input and aren't sufficiently robust for use by children. He eventually comes to the rather inane conclusion that "for what they do well they are fine." For things they do badly, we still need other devices.

For my part, the answer is obviously "no" since I don't have a tablet and have yet to drop out of society, or even be inconvenienced in any way. I have a netbook that I find quite useful for reviewing lecture material and drafts while I'm on the train. Perhaps a tablet would be better for reading books and magazines (which I also do), but I do enough typing to feel that a device with a keyboard is the most appropriate tool for the job. See my comments on the article for my full argument.

I sometimes wonder if all the fuss about tablets (and cloud computing and any number of other buzzwords that have appeared over the years) is driven by a self-fulfilling prophecy in which everyone buys tablets because everyone says tablets are the way of the future. Did Steve Jobs anticipate the market for tablets when he introduced the iPad, or did the market buy tablets because a charismatic and influential figure anticipated them? After all, tablets of various sorts existed a long time before the iPad: Apple itself released the Newton in 1993, while Microsoft introduced Windows for Pen Computing in 1992 and Windows XP Tablet PC Edition in 2002.

Now, it could well be that technology just wasn't up to the task of making tablets work in the days of the Newton and Windows for Pen Computing, and advances in technology have now made it the right time to try again. Sun Microsystems and others were eager promoters of "network computing" and "thin clients" in the 1990's, for example, but the networks of the time didn't have the capacity or connectivity to support what we now call "cloud computing". With increased network capacity and connectivity, the 2010's might be a more fertile ground for similar ideas.

Whatever the case, it's hard to see us escaping from the boring old paradigm of using the right tool for the job. Henry Jenkins refers to visions of a single, unified computing/communications device as "the black box fallacy" in Convergence Culture. As several of the comments on Sussex's article observe, it's not that tablets lack the technological sophistication of Jenkins' black box; it's that ergonomics dictates different tools for different tasks. And even if some future tablet -- or smartphone or wearable computer or microchip implant -- could somehow meet all of one's computing and communications needs, would it make a very good refrigerator?

Another step sideways for electronic cash

2012-09-13 by Nick S., tagged as commerce, mobile computing, prediction

In his contribution to the September 2012 issue of IEEE Computer Magazine, Neal Leavitt asks Are Mobile [Computer] Payments Ready to Cash In Yet?

A few years ago, I participated in a workshop on future technologies during which my discussion group was asked to answer the question "When will cash be replaced by electronic payments?" I, and a fellow group member whose name I forget, quickly gave the answer "Never". Of course "never" is a long time, and history is well-supplied with infamous statements that heavier-than-air vehicles would never fly, that the world had a market for about five computers, and that home users wouldn't need more than 640 kilobytes of memory.

Nonetheless, I think Leavitt (probably unconsciously) tells us something in his summary of why mobile payments have not yet superseded cash. According to his "industry observers", such payments are hampered by "inadequate security, a lack of standards, and limited interoperability between systems".

I'm sure these are all genuine difficulties with mobile payments, but the industry observers seem to have forgotten to ask: in what way do mobile payments improve upon cash? Why switch to payment by mobile computer when cash, EFTPOS and credit cards already provide effective and convenient methods of portable payment? (The mobile computing industry presumably considers getting a cut of the payments to be reason enough.)

Towards the end of the article, Leavitt summarises the thoughts of a market analyst by the name of John Shuster. Shuster, as paraphrased by Leavitt, conjectures that users "may see no compelling reason to adopt them", and I think this might actually be the most significant reason for limited interest in payment by mobile computer.

Electronic cash looks to me to be one of those science fiction ideas that futuristic writers always assumed would come about, but somehow never did. Like videophones, flying cars and talking computers, it's not that electronic cash is necessarily beyond us; it's just that it isn't particularly useful compared to the established alternative. This is why we haven't replaced our telephones with Skype, our cars with helicopters, or our keyboards with microphones. And why I think few outside the mobile computing industry are rushing to replace cash with gadgets.

On what technology would you like to depend?

2012-09-07 by Nick S., tagged as dependence, freedom

While I was still thinking about technology-dependence last week, I happened to read an assertion in Jonathan Lyons' The House of Wisdom (p. 32) that "Accurate timekeeping would one day free society from the dictates of sunrise and sunset and recast the day or the hour as an abstract notion distinct from daily experience." This seemed a fairly peculiar idea of freedom given that, in Lyons' own words, it resulted in "the regular ringing of monastery bells" and "the tentative beginnings of an organized social order". Did we simply exchange slavery to the Sun for slavery to the clock?

Lyons goes on to claim that timekeeping enabled people to see "the universe as something that could be measured, calculated and controlled." I assume that Lyons was thinking about the "controlled" part when he wrote about the freedom brought about by clocks, even though a clock by itself obviously contributes only to the "measured" part.

Kevin Kelly's What Technology Wants makes much of the idea that technology gives us choices. Kelly has some vague idea that technology is important, and I have no clear idea of what he thinks it wants. Nonetheless, I can see the point that the invention of clocks, for example, gives us a choice between organising our days according to the Sun, or according to clocks.

I think "us", as opposed to "me", is an important word here. I like to rise with the Sun and get to work as soon as I have eaten my breakfast. But we Australians have made a social and economic agreement that shops and offices will open at 9am, or thereabouts, irrespective of what the Sun and I like to do. Being part of this society, I have to co-operate with it.

So perhaps Lyons isn't so peculiar, given that he refers to "freeing society" and not necessarily freeing individuals. As much as modern society is dependent on clocks, electricity, telephones and the rest, we at least arguably chose to depend on those technologies rather than the Sun, human muscle and smoke signals. So long as society chooses its technologies, rather than let its technologies choose it, society's freedom seems to be intact.