The philosopher Jeremy Bentham has a place at University College London. Not an office. Even though Bentham has a huge worldwide reputation derived from a large corpus of published works, he has to sit in a box at the end of the corridor. Well, he’s been dead for about a hundred and seventy years.
As a relentless pragmatist, Bentham left his body to be dissected at a public anatomy lecture. Afterwards, again in accordance with his will, the remains were assembled and clothed and placed in a wooden display cabinet called the “Auto-icon”. About a hundred and fifty years ago, UCL obtained the Auto-icon and it’s been on show there ever since. After some problems with students stealing his head, Bentham was given a wax one, modelled on his features, and the real head was locked away.
Bentham’s philosophical approach to morality came to be known as “Utilitarianism”, the idea of “the greatest good for the greatest number”. That may seem trivial, but what distinguished Bentham was that he pursued the idea rigorously. Thus he opposed criminalisation of anything that had no negative impact on anyone. (He wrote an article in favour of decriminalising homosexual acts. This was the 1700s, remember.) He thought that legal and social differences between men and women were absurd. He supported animal welfare, on the basis that there is nothing that can logically separate humans from other animals: a dog is more intelligent and aware than a new-born human and nothing can justify cruelty to either.
One of the criticisms of utilitarianism as a basis for morals is that it ignores intent, and simply focuses on the outcome: good or bad. Those who favour absolute moral positions might say that it is always wrong to kill someone, while a strict utilitarian might consider that killing one to save many could be morally “right”.
I’ve never really managed to articulate my own moral code, but I’m certainly not an absolutist. I don’t think it’s rational to have a fixed set of rules that must be applied to any and all circumstances. (I’m not religious, so that’s not a problem for me.) But if I were to apply utilitarianism strictly, well, that would be a kind of absolutism too. Because I have the kind of brain that is happy with ambiguity and lack of closure, I’m content to base my morals on vague and unspoken general principles. But if I had to speak them, they’d be something like “Try not to do any harm” and “The world would be a better place if we were all nicer to each other.”