Think It Possible You Might Be Wrong

I don’t know exactly why I was so upset by the death of Savita Halappanavar. In part, it was because the very people whom she trusted to save her failed in their duty to do so, but that would apply to any case of medical negligence. I think that in the main, I was just appalled that anyone would be prepared to enforce a position of moral absolutism even in the face of such distress and death.

I don’t like absolute moral rules, because I think that no rule can cover the complexity of every situation. In fact, people seem to use them simply to avoid the effort of thinking. In George Orwell’s Animal Farm, a parody of totalitarian states, “four legs good, two legs bad” was a rule designed to relieve the citizens from the burden of thinking.

There are some issues where it’s easy to decide which side you should be on: racism, gender equality, democracy, say. I don’t think that abortion is one of those. It’s difficult, and complicated, and there is no easy answer.

But here’s how I think about it. An unfertilised egg cell is not a human being. A new-born baby is. A human life arises between those two points.

The absolutist position adopted by many (mainly religious) people who oppose abortion totally is that human life begins at the instant of conception. To me, that makes no sense. A fertilised egg might be a potential human being, but then it was a potential human being a moment before. Conception is just one of the essential stages on the way to life. Why not choose successful implantation (which often fails to occur) or some phase of cellular division?

Setting the beginning of life at the instant of conception is purely a religious, mystical belief with no basis in biology or common sense. While people are free to believe whatever ideas they like, it’s the duty of society to prevent those beliefs from being imposed on others who reasonably disagree. Savita must have been distraught that her pregnancy had failed, but there is no reason to suppose that she thought it should be carried to the point of her own death anyway. Somebody else made that decision for her.

Even if you do discard the mystical, if there’s to be a law, it has to be the best we can make. A law will have to say that up to some point, for some reasons, it will be legal to terminate a pregnancy, but neither of those criteria is obvious or easy.

My own instinct is to balance the two. That is, no restriction on measures such as the “morning after pill”, which prevents implantation of a fertilised egg cell; but very strict conditions from the point where a fetus might conceivably be saved medically, say 20 weeks. That’s actually more or less the UK law as it stands, but review and discussion are always good. Sticking to an absolute is never good.


Calfskin Smack

I have a heavy leather jacket, which I bought some years ago for a trip to France. It was October, so I wanted to be warm but stylish. Can’t go round looking like a bloody tourist. In the event, there was a freak heatwave (up to 28 degrees) for all but the last two days and the jacket didn’t see much use on that trip, but it’s served me well ever since.

It’s got a faux-fur collar and a double zip system. In fact, it’s absolutely identical to one that Porsche are selling, apart from two things: it doesn’t have “Porsche” on the zip tags; and it cost literally one tenth of what they’re charging.

Anyway, after a number of years of wear it was becoming grubby, particularly the lining. Normally, you’d first think of dry cleaning for something like a leather jacket, but it was old, and didn’t cost much in the first place. I wondered if you could wash it at home.

Like every normal person today, I searched the internet for an answer. What I found was basically two answers. One was that, no, you should never home-wash leather, because you will spoil it irrevocably. And the other, naturally enough, was that it was no problem: washing, even in the machine, was fine.

I wasn’t as flummoxed by this conflicting advice as you might think, because what I noticed was that every one of the nay-sayers was speaking from hearsay; while every one of the supporters had tried it themselves and was speaking from experience. A no-brainer in the end.

I washed it in some “gentle” shampoo which I happened to have. (If you’ve seen me, you’ll realise why shampoo is superfluous.) I was almost certainly being excessively cautious. I mean, cows don’t shrink in the rain, do they? And it worked fine. The jacket wasn’t damaged at all, and it smelled nicer.

So that’s a life lesson. The people who tell you not to do things can be disregarded, unless they happen to have tried it themselves.

Dear Friend, I Need Your Help To Secure $300M…

Some new research was published last month in Frontiers In Neuroscience*. In it, the authors investigated their theory that belief is a fast, automatic neurological process, while scepticism uses a slower, more fragile system.

As they conclude: “Belief is first, easy, inexorable with comprehension of any cognition […] Disbelief is retroactive, difficult, vulnerable to disruption […]”

They went on to test whether the disbelief process is tied to one particular brain structure, the ventromedial prefrontal cortex (vmPFC), by showing fictitious television ads to a group of patients, some with damage to the vmPFC. The numbers were small, but participants with a damaged vmPFC did show a substantial difference: they were twice as likely as healthy participants, or those with other kinds of brain damage, to be taken in by adverts that were actually designed to “give away” their own dubious claims.

The theory presented in the paper is that in everyone, an assertion is automatically believed once it is understood, but the vmPFC then takes part in a process to challenge the truth and mentally “tag” the assertion as false or unproven.

Clearly, at least some scepticism is essential for normal cognition and functioning in the world, but it’s likely that we all have different levels of activity in the vmPFC. I, for example, obviously have an Olympic-quality vmPFC, because I automatically challenge the truth of everything I hear. Others may be more gullible.

Unfortunately, the vmPFC is one of the parts of the brain which naturally deteriorates with age. It is known that the elderly are more likely to be taken in by financial frauds or Nigerian scams, although a general mental decline with age might be as important a factor. But it would be good practice to help elderly relatives with financial or other decisions by encouraging them to be sceptical.

I have no idea whether you can prolong the active life of the vmPFC by exercising it, but I’m going to try. Every day I’ll be disbelieving six impossible things before breakfast.

* Full article available at <>

I Find Your Lack Of Grammar Disturbing…

Bad spelling and grammar on the internet irritate me, but I’m not a grammar nazi. I just think our superior race should make the ultimate sacrifice for the language of the fatherland.

No, seriously. I do get annoyed by sloppy, careless use of English, in the same way as I get annoyed by bad driving, or badly-designed products. Communicating in English is a skill, but a simple one that almost everyone is capable of carrying out accurately. People who do it badly just aren’t trying.

Perhaps the most abused speck on your computer screen is the apostrophe. The main rules for using it are basic and simple, but many people still get it wrong. Two purposes: (1) to show possession and (2) to indicate omitted letters. And that’s it.

In fact, although I haven’t looked this up and could be wrong, I suspect that the two uses have the same origin. If you look at 16th century music, you might see, in the language of the time, titles such as “Mr. Brown, His Galliard”, which isn’t far from “Mr. Brown’s Galliard”.

I admit that there is one minor trap in the possessive pronouns, “his” and “its” (but not “her”) which are words in themselves and don’t need apostrophes. Because we also have “it is” contracted to “it’s”, the simple-minded can get confused, although it takes real determination to write “hi’s car” (but I have seen it done).

Sometimes I suggest that if you aren’t sure when to use an apostrophe, then don’t. You’ll be wrong some of the time, of course; but less often than if you pepper them in willy-nilly.

The other day I saw some internet English creep into the BBC’s news ticker at the bottom of their News 24 programme. It said that the Greek mainstream parties would “loose” support in the election. Given that “to loose” actually can be a verb, but means something like “to set free”, a strict interpretation of the statement leaves the mind boggling. But some people would probably read “loose the dogs!” and wonder about finding them again.

It’s true that “lose” is an anomaly in English, since other words spelled similarly, “hose” or “pose”, say, don’t rhyme with it. But then, on that basis, you should know that “loose” rhymes with “noose” and “goose”. Really, for phonetic accuracy, “lose” should be spelled “looze” or “luze”, but I suppose there’s a kind of conservatism in English spelling that keeps “lose” consistent with “loss” and “lost”.

Of course, some people don’t bother to spell at all. Do you remember “text-speak”? I think it was an early 1990s fad, when SMS was first introduced and entering text was slow and cumbersome. I do think that the first person to think of using “M8”, derived from “m-eight” to mean “mate”, was creative and original. But now that our phones often have full keyboards, and in any case, automatically complete English words*, there’s no place for text-speak in phone messaging. It just makes the message harder to type and harder to read.

It’s that latter fact that makes it so objectionable in other contexts. When you learn to read, you soon get past the stage of assembling individual letters into a word — “SEE AH TEE: CAT!” — and perceive words as a unit. Experienced readers then progress to recognising entire phrases all at once. Certainly, when I find a construction like “2mro” (for “tomorrow”) in a piece of text, I get the sensation of stumbling, and the smooth flow of reading is disrupted.

It’s really all about good manners. Putting someone else to unnecessary trouble isn’t polite, so be nice. Write nice.

* although with the iPhone, often not the one you meant.

A Modest Proposal

If you have children, then you’re condemning them, and their descendants, to consume the Earth’s resources. That wouldn’t necessarily be a problem, except that the world’s population is currently over-consuming. The total load on the planet is now about 1.5 times what it can sustain indefinitely. Technological improvements may close the gap somewhat, but it’s a big challenge even to run fast enough to stay in one place.

The trouble is that it’s everyone’s ambition to live a life like the people in Western-type countries. If everyone lived like Europeans, the load on the planet’s resources would not be 1.5 times what it can sustain; it would be 3 times. The American lifestyle is even worse: it rates over 5.
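Kicking the numbers around makes the scale of the problem clearer. The figures below are illustrative round values in “global hectares”, chosen to reproduce the multiples quoted above rather than taken from any official dataset; a rough sketch of the arithmetic might look like this:

```python
# Back-of-envelope footprint arithmetic. The number of Earths a
# lifestyle implies is simply the per-capita ecological footprint
# divided by the per-capita biocapacity the planet offers.
# All figures are illustrative round numbers, not official data.
BIOCAPACITY_PER_PERSON = 1.8  # global hectares available per person

footprints = {
    "world average": 2.7,  # roughly 1.5 Earths
    "European": 5.4,       # roughly 3 Earths
    "American": 9.0,       # roughly 5 Earths
}

for lifestyle, gha in footprints.items():
    earths = gha / BIOCAPACITY_PER_PERSON
    print(f"If everyone lived like the {lifestyle}: {earths:.1f} Earths needed")
```

Note that the population size cancels out of the division: the “number of Earths” depends only on per-capita consumption relative to per-capita capacity, which is why the same multiple applies however many of us there are.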

Less developed countries are using up less than their share, at the cost of widespread miserable poverty. One country falls exactly on the 1.0 point, meaning that its citizens, on average, have a sustainable level of consumption. That’s Cuba; so if everyone in the world lived like a Cuban, there would currently be enough to go round. And we’d have an excellent, free medical service and a fine education system. It wouldn’t be so bad. And rum. And great cigars. And the music.

On the other hand, everybody could have the American life if there weren’t so many of us. Current predictions show the global population topping out during the next century, but without active measures it will take many hundreds of years to fall to a sustainable level, if it ever does.

There are some organizations which advocate population reduction. For example, the Church of Euthanasia recommends suicide and cannibalism (although not necessarily in that order), while the Voluntary Human Extinction Movement settles for a universal abstention from reproduction until humans become extinct.

Although I personally have decided not to breed, I think extinction is excessive. The planet could comfortably support half a billion humans, I’m sure. (A seventeenth-century number.) You could say I’m comfortable with that idea.

If you do have children and you now feel guilty about the ongoing burden you have placed on the planet, you have a few options. First, and probably the best solution, is to send them to Cuba. Excellent education system, although there may be political issues to do with free expression and so on. However, they will be educated to accept that, and soon they will be calling you a “capitalist lackey”, that is, if they speak to you at all.

If you don’t like the idea of Cuba, you could always take the hit on the current generation, but ensure that it stops there. It’s a simple medical procedure — my cat can explain. They will thank you for it in the long run, honestly. Or, finally, there’s the Jonathan Swift proposal. Swift suggests “stewed, roasted, baked, or boiled”.

The Greater Good

You might have heard of the set of imaginary scenarios which have been developed to explore people’s moral choices. One version, on the PBS website, gives you the chance to make interactive life or death decisions about tiny people.

For example, we are presented with a runaway train, about to crash into four unfortunates trapped on the track. We would like to save them by diverting the train onto a spur, but what if there was one person trapped on that track? Divert or not?

There’s a general consensus about how people respond to the various scenarios. Most people’s instinctive approach is that doing harm in order to do good can be justified if, in some sense, the harm is not “intentional”. For example, the majority say that they would divert the train in the first scenario, killing one to save four; but most refuse a modified scenario in which you have to deliberately toss someone onto the track to stop the train. (The second one in the PBS quiz.)

If I have a moral philosophy of life, it would be “Utilitarianism”, described by Jeremy Bentham, one of its earliest proponents, as “the greatest happiness principle”. Actually, terms like “happiness” and “pleasure” have tended to obscure the true concept of utilitarianism, leading many people, even distinguished philosophers, to condemn it for being something that it isn’t.

For example, you might think that a utilitarian idea of the greatest good for the greatest number would make the decision to chuck the fat guy in front of the train a no-brainer. One dead, four saved. But that’s too narrow a view. It ignores the fact that “you” in the scenario would have to do a dreadful, evil thing: deliberately killing a person. You might still decide that it was the least worst option, but that’s how I look at utilitarianism: a framework for thinking about moral choices, not a system of absolutes.

(At around the same period as Bentham, Immanuel Kant was arguing for the “categorical imperative”, in which you must always “do the right thing”, regardless of the consequences. It’s a concept of moral absolutism which I dislike very much. Consequences are important.)

These thoughts and theories came to mind when I heard on the news that the attempt to rescue two hostages (one British, one Italian) held by terrorists in Nigeria had resulted in their deaths. Perhaps untypically for a bleeding-heart liberal, I think that such military operations are morally justified. I’m not sure I believe the official line that the hostages’ lives were in imminent danger, forcing the attack. That sounds a lot like retrospective justification for an operation that went wrong.

But, for the greater good, mounting the operation was the correct decision, even with a known risk that it might go wrong. It has been said that there was a difference in approach between the Italian authorities and the British, in that the Italians would have preferred to pay a ransom. The consequence of that attitude is that Italian civilians become valuable to terrorists, putting further innocent lives at risk. Like the man controlling the railway points, I think it’s consequences which count, not principles.

The Anti-Turing Test

Alan Turing is something of a hero to many people who have a background in formal computer science. Even before any real computers had been invented, Turing had laid out innovative and fundamental principles governing how they could work. In a way, he was a war hero too, being one of the top mathematicians involved in breaking German cyphers at Bletchley Park.

After the war, Turing found himself persecuted for his sexual orientation and (probably) committed suicide in 1954. It wasn’t until 2009 that the British Government issued a formal apology for his treatment.

In a 1950 publication, Turing addressed the question “Can machines think?” with his usual rigour, and pointed out that neither “machine” nor “think” can be formally defined in such a way as to make the question really meaningful. He proposed side-stepping the issue and instead settling for what became known as the Turing Test: can a machine “fool” a judge into thinking that it is human? If it is utterly indistinguishable from a human, then, logically, you have exactly the same evidence that it is thinking as you have for other people.

From a practical point of view, Turing suggested that the test could take place by exchange of teleprinter messages, so that the judge would have no clues about whether a human or machine was communicating with him.

The Turing Test was supposed to be a thought experiment, a way to reason around the concepts and examine assumptions. Turing pointed out that the test could never prove that a machine was conscious, but then it’s equally useless at proving that a human at the other end of the teleprinter link is conscious either.

The Turing Test inspired the invention of the chatbot, a computer program that tries to respond to text chat in the way a human would. But the first of these, Eliza, really exposed the problems of using humans to judge the Turing Test: it turns out that humans are ridiculously easy to fool. Eliza was born in 1966, and with the primitive speed and storage of computers of the time, could only manage a very basic level of functionality — mainly recognizing certain “trigger words” and generating a pre-prepared response.

Yet many people exposed to Eliza’s chat quite happily accepted that they were in communication with another human, and some even flatly refused to believe the truth when it was revealed. What Eliza did could never be called “thinking”, but she was passing the Turing Test anyway. (A recent descendant of Eliza, Apple’s Siri, does have much more sophisticated language processing, but falls back on Eliza’s simple tricks to cover up the gaps.)
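The trigger-word mechanism is simple enough to sketch in a few lines. This is not Weizenbaum’s actual script (the patterns and replies below are invented for illustration), but it shows the technique: scan for a trigger, fill a canned template with whatever the user said, and fall back on a stock evasion when nothing matches.

```python
import random
import re

# A minimal Eliza-style responder. Each rule pairs a trigger pattern
# with a canned reply; captured text can be echoed back into the
# reply template. These rules are invented for illustration.
RULES = [
    (r"\b(mother|father|family)\b", "Tell me more about your family."),
    (r"\bI am (.+)", "Why do you say you are {0}?"),
    (r"\bI feel (.+)", "Do you often feel {0}?"),
]
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def respond(text):
    for pattern, reply in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            # Echo the captured fragment, minus trailing punctuation.
            return reply.format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)
```

So `respond("My mother is kind")` produces “Tell me more about your family.”, and anything unrecognised earns a stock phrase like “Please go on.” No understanding is involved anywhere, which is rather the point.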

One interpretation of Eliza’s success is that humans are so accustomed to assuming that everyone else has thoughts that the slightest evidence is accepted. In fact, it’s common enough to assign thoughts and desires to inanimate objects: “The car is reluctant to start this morning”.

But I’ve been looking at it from the other direction. There is no way at all in which a human could prove that he or she was not a chatbot. No matter how clever, or creative, or emotional a response seemed, it could have been just the result of some clever programming. I know that I’m conscious, but I’m not so sure about the rest of you. In fact I think you’re all zombies.