25 February 2007

self deception


You will remember that the word ‘English’ is autological because it describes itself: it refers to a language and it is a word in that language. ‘Spanish’ is heterological because it does not refer to itself. ‘Word’ is autological, but ‘verb’ is not. This is the basis of the Grelling-Nelson paradox, formulated in 1908: the paradox arises when we ask whether ‘heterological’ is itself heterological, since if it is, it isn’t, and if it isn’t, it is. A question I want to address is whether ‘self-deception’ is in a sense autological, even if that sense is a rather special one. I want to argue that the problem or problems with self-deception are rooted in language and not in the philosophy of mind, intention or consciousness.
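The autological/heterological distinction, and the contradiction it generates, can be sketched in a few lines of code. This is my own toy illustration, not part of the Grelling-Nelson formulation; the word list and the crude "describes itself" predicates are invented for the example:

```python
# Toy model of the Grelling-Nelson paradox (illustrative only; the
# predicates are crude stand-ins, not serious linguistics).

# Map each word to a predicate testing whether a word has the property
# that the word names.
predicates = {
    "short": lambda w: len(w) < 5,          # is the word itself short?
    "polysyllabic": lambda w: len(w) > 6,   # rough proxy for "many syllables"
}

def autological(word):
    """A word is autological if the property it names applies to the word itself."""
    return predicates[word](word)

def heterological(word):
    """A word is heterological if it is not autological."""
    return not autological(word)

print(heterological("short"))        # True: 'short' (5 letters) fails its own test
print(autological("polysyllabic"))   # True: 'polysyllabic' is a long word

# The paradox: define 'heterological' by its own rule and ask the question.
predicates["heterological"] = heterological
try:
    heterological("heterological")   # is it, or isn't it?
except RecursionError:
    print("no consistent answer")    # the definition has no fixed point
```

The point of the sketch is that every ordinary word gets a straightforward yes/no answer, but the self-applied question never terminates: the definition simply has no consistent value for its own name.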

In other words, self-deception is one of those philosophical problems in which we are “bewitched … by means of our language” (Wittgenstein, Investigations §109); maybe not so much bewitched as certainly confused. First, however, I want to give a very quick introduction to the problem of self-deception (see the entry “Self-Deception” in the Stanford Encyclopedia of Philosophy).

Traditionally, self-deception was taken to mean that the self-deceiver believes that P while knowing or believing that not-P. Hence self-deceivers must hold contradictory beliefs and intentionally get themselves to hold such beliefs. This, of course, is based on the way we interpret deception of others: A intentionally tries to get B to believe that P when all along A knows or believes that not-P.

Some philosophers claim that this traditional view of self-deception introduces two paradoxes. The ‘static paradox’ asks how someone can be in a state of mind in which they hold contradictory beliefs at the same time. The ‘strategic paradox’ asks how someone can try to deceive themselves without rendering their own intentions ineffective.

Although a minority of philosophers have argued that self-deception is not possible, the mainstream view is that self-deception is possible, and the project is therefore to deal with the paradoxes. The intentional models of self-deception try to explain how it is possible for someone to intentionally get themselves to believe something they know or believe very well not to be true. Some have argued that the self-deceiver introduces some form of psychological partitioning that protects them from their own deceptive strategy. Loss of memory or perhaps “accidentally” changing the facts are two ways of doing this.

The non-intentionalist approach does away with the interpersonal deception model (deceiving ourselves the way we deceive others) and opts for some form of motivationally biased belief. Hence, to be self-deceived is nothing more than to believe falsely: we desire ourselves to have false beliefs. Some intentionalists object that introducing motivation or desire fails to distinguish self-deception from wishful thinking or daydreaming. The encyclopaedia entry has a very detailed review of the modern debate on self-deception.

Of course, the non-intentionalists seem to have a point: something is going on here, and the evidence points more towards the existence of this something than towards the paralysis of a paradoxical state of affairs. The problem for the intentionalists is, of course, the intention. However, we can only assume that intention is involved because of two basic facts: 1) interpersonal deception does involve intention on the part of the deceiver; I would even go so far as to say that we probably reach this conclusion from some personal experience. 2) The self-deceiver behaves as if they are acting intentionally, and intentionally to deceive themselves. I will leave item 1) for later, but as far as 2) is concerned, “acting as if” is not the same as “acting from” or “being.” Of course, this takes us closer to the non-intentionalists’ solution.

Thus behaviour is not necessarily always a good indicator of what the intentions are or, better still, of the cause of the behaviour. We can, however, say that this contradictory behaviour, or attempted behaviour, is probably some sort of conscious-unconscious battle. We might go a bit further and suggest that some beliefs are caused mentally or psychologically while others might be due to some “brain” activity, with the conscious self-deceiver being unaware of the latter cause. It might be objected that this takes us outside the scope of deception altogether. At least motivation or desire leave some scope for consciously causing an intentional act; we usually allow some form of intentional will to desire something.

Maybe by going too far into reductionism (unconsciousness, brain activity and so on) we might not only miss the wood for the trees but might even exit the forest altogether. One question that does not seem to be asked in a debate on self-deception is this: who benefits from self-deception? Does the self-deceiver behave and act as if they were deceiving themselves to influence others or to influence themselves?

This line of thought introduces a new dimension, the motivation for behaving “as if” one is deceiving oneself, and it might explain certain instances, if not all instances, of self-deception.

Take the following scenario: A knows that x is dangerous, but if A believes that x is not dangerous, others will come to believe the same as A. Maybe, if everyone believed that x is not dangerous, then we won’t have to deal with the implications of the danger. This example might be regarded as straightforward deception, but the point is that the others know that x is dangerous, yet because A now behaves as if they believe that x is not dangerous, the others might follow. After all, as I have just said, behaviour is the only practical way we have to discover someone’s intentions. And even if someone honestly says that they intended one thing but behaved in another way, most of the time we would still rely on the behaviour rather than the honest words.

We can look at the question of who is benefiting from a different point of view. We can reasonably assume that manifestations of self-deception are also public. We can see and observe that someone is acting as if they were deceiving, or trying to deceive, themselves. If we didn’t have this evidence we wouldn’t know what was going on with the other person, especially if they behaved consistently.

In other words, self-deception can be seen as an act of communication, whether it is done intentionally or not. If we accept this step we must also accept the next: an act of self-deception also conveys information, information that in principle can be processed and understood by observers. But as Dawkins showed us, the purpose of communication is to manipulate others; of course, manipulation can, in this context, be morally neutral.

This argument, if we accept it, points towards self-deception being there for the “benefit” of others rather than of the self-deceiver. But it does not follow that the self-deceiver automatically wants to deceive others in a negative way. Take, for example, the parent who suffers from fear of dentists. They know that they are going to feel pain when they go to the dentist; maybe they even have a clinical condition that limits the use of anaesthetics. But this parent wills themselves to believe that going to the dentist is not painful. This is done not so much for their own benefit as to give their child the impression that going to the dentist is no big deal, so that the child won’t create problems about going to the dentist.

But if we accept this positive argument, can we seriously continue talking about self-deception? Of course, if the self-deceiver hopes to manipulate us in a negative way (e.g. to defraud us), then their holding contradictory beliefs should be a warning sign: we should proceed with caution, treat them as unreliable, or simply let the men in white coats sort it out. But if this were the case, would we still be talking about self-deception?

So how did we get into this mess in the first place? This is where Wittgenstein’s observation comes in, although what follows are my arguments.

Autological words refer to themselves. Hence, is self-deception, as a noun group, something that refers to itself? In other words, does self-deception deceive itself? I said that I was using ‘autological’ in a special sense. What I mean by self-deception deceiving itself is that it changes its meaning whilst still maintaining the impression that it is following the semantic and syntactic norms of this type of noun group.

Hence, if self-deception is to be understood as desiring to hold contradictory beliefs, or as neural or mental blocking of ideas, or deception of others, or obliviousness to reality, or unreliability, or just craziness, then self-deception is autological: it has changed its meaning.

Mind you, this is not the same situation as those sentences or phrases that refer to themselves and create identity, logical or semantic paradoxes; for example, “This sentence is false” or “This sentence is objective.” Don’t forget that traditionally the paradox of self-deception was one of intention and not of meaning.

If self-deception applies to itself, in the sense I have outlined, then surely the non-intentionalists have practically won the argument and Wittgenstein was right about philosophy and language. All we have to do now is discuss what people try to do to themselves and others, and forget about intention and all that.

If, on the other hand, self-deception is heterological, so that it does not change its meaning and follows the norms governing the use of ‘self’ as a combining form, or as an adjective in an adjectival noun group, what are the implications?

The character of the prefix ‘self-’ (or of the adjective) in these groups is that, in the most commonly used groups, the noun or verb does not change its meaning or use. Thus, in ‘self-employed,’ the ‘employed’ part does not change its meaning when combined with ‘self.’ The same goes for self-service, self-mutilate, self-assessed (as in tax forms), self-assured and so on. Basically, what we do for others we do the same for ourselves.

There are, of course, some exceptions. The one I can think of now is ‘self-made,’ as in self-made millionaire, meaning to achieve a grand success without the help of others, certainly not by being employed. But the curious thing about ‘self-’ is that there is a group of words that cannot be combined with it, or rather, that make no sense if combined: for example, self-conceive, self-create, self-stalk, self-back-scratch (is this English?), self-manipulate, self-elect and so on. Some members of this third group might even be oxymorons. My favourite example is to perform a self-autopsy.

The question is, where does self-deception fit in all of this? In all the normal examples in the first group, there is no question that the act being performed is based on intention, irrespective of whether it is rational, irrational, positive or negative; nothing is done by accident. It is also evident that there are words to which we cannot apply ‘self-’ at all. Hence, is self-deception a member of the first group, the common combined group; the middle group, like ‘self-made’; or the third group, the excluded set of words?

This leaves us with deception; I now come back to item 1) above. Of course, this is not the place to go into an in-depth analysis of deception, but we are interested in some of its important features. Deception is not only about intention, although intention does play an important part. Sometimes deception involves morally unacceptable actions, but not always. Principally, deception is about disinformation used to obtain some advantage over someone else; I use ‘disinformation’ specifically to exclude accidental misinformation. Hence deception is about communication that manipulates others. In a way, it is about creating an impression of cooperation, maybe even in an evolutionary sense, when in fact it is a trap to disadvantage the other person. Whether it is the serenading of the male cricket or the camouflage of a soldier, the objective is the same: to manipulate the actions of the receiver of the message; to bring the female to the male cricket in the first case, and to keep enemy soldiers away in the second. Deception is also used to obtain an advantage immorally; for example, to defraud someone of their savings or to sell them poor-quality products.

When a person is deceived, however, they are deceived not because someone intended to deceive them, nor because the deception was neutral or immoral, but because the deceived person received false information (disinformation) and interpreted this information as if it were genuine. The deceived person is not in a state of knowing or believing not-P; rather, the deceived person believes that it is the case that P.

It is not clear how disinformation fits in with self-deception. What is it like to disinform oneself? (See Nagel’s “What Is It Like to Be a Bat?” for the thinking behind this question.) It would be interesting to find out how it is possible to disinform oneself when the key feature of disinformation is that the person being disinformed does not know that they are being disinformed.

The self-deceiver is also not in the same epistemological state as someone who is being deceived: the deceived person believes that P, but the self-deceived person is supposed to believe both P and not-P. As I said earlier, holding P and not-P implies either that the person is unreliable or, worse, crazy. I would therefore say that the “deception” in self-deception is not the same deception as interpersonal deception. Self-deception, in my view, does not contravene Aristotle’s principle of non-contradiction, which states that something cannot be and not be at the same time. The self-deceiver would be either unreliable or crazy if they held P and not-P. Hence, self-deception does not imply P and not-P, because deception of others does not imply that the deceiver holds not-P; as far as the deceived person is concerned, they believe that P.

Moreover, the disinformation that is present in deception is missing in self-deception. In any case, it is not clear how we are supposed to manipulate ourselves by disinforming ourselves.

It seems that Wittgenstein was right, because the fuss about self-deception really is based on a language problem. The combined form of ‘self’ and ‘deception’ behaves the way some combined-form groups behave in English: the new group does not inherit the meaning the individual words have when used independently. Self-deception as a combined-form group not only changes its meaning, but has had us fooled all along. The non-intentionalists seem to have a point.

Conclusion: self-deception is all about good old fashioned communication.

Take care



23 February 2007

Subjects Discussed 25-02-2007

@Pub Philosophy Group – Madrid Topics Discussed

in alpha. order

  1. Aging
  2. Archetype
  3. Are there moral principles?
  4. Attraction
  5. baby boomers
  6. Being One's self
  7. beliefs
  8. Can we live without wars?
  9. competition
  10. Creativity
  11. decisions
  12. Do we need ambition to succeed?
  13. Does language determine culture?
  14. Double Standards /hypocrisy/
  15. Education ..what is ethical?
  16. environment
  17. Escape from reality
  18. ethics of solidarity
  19. Euthanasia
  20. FAMILY……
  21. fate
  22. fear
  23. forgiveness
  24. Freedom and Privacy
  25. freewill
  26. friends
  27. Fundamentalism
  28. good and evil?
  29. guilt
  30. Happiness
  31. hedonism
  32. How does language affect reality and perceptions?
  33. Human sexuality
  34. humour
  35. idolatry
  36. image and society
  37. immigration
  38. information overload
  39. integrity
  40. Interpreting Reality
  41. Is art necessary and/or liberating?
  42. Is entertainment necessary for life?
  43. Is it better to be young or to be experienced?
  44. Is it love or is it sex?
  45. is it possible to be honest in politics?
  46. is knowledge dangerous?
  47. Is love necessary for life?
  48. Is pain necessary for life?
  49. Is sexism natural?
  50. Is there a struggle between the conscious and the unconscious?
  51. Is there an end to .......?
  52. Is there such a thing as right or wrong?
  53. Jealousy and Possession
  54. Justice and Revenge
  55. life and death
  56. Living the moment
  57. loyalty and infidelity
  58. Luck vs Talent
  59. lying
  60. madness
  61. male-female archetypes
  62. mixing cultures
  63. Models in our life and society
  64. multiculturalism
  65. music
  66. Neurotic Society with the environment.
  67. nothingness
  68. optimism
  69. pain
  70. pleasure
  71. political truths vs social lies
  72. Relationships and Conflicts
  73. Religion and Education
  74. Risk
  75. satisfaction
  76. self deception
  77. Selfishness and Altruism
  78. sense of humour
  79. should there be limits to democracy?
  80. Should there be limits to science?
  81. social evolution
  82. Social groups and Vested interests
  83. social responsibilities
  84. symbolism and religion
  86. the benefits of immorality
  87. The consequences of ignorance
  88. the end justifies the means
  89. the end of ideals
  90. The ethics of solidarity
  92. The forming of public opinion
  93. the impact of technology on us
  94. The individual and technology
  95. The necessity of Faith.
  96. The new female - male revolution.
  97. the role of gender in crime
  98. the value of experience
  99. TIME
  100. to have or to be
  101. To what extent are we masters of ourselves?
  102. Tolerance – intolerance
  103. totalitarianism
  104. Truth
  105. Violence
  106. Virtues and Vices
  107. What are the necessary and sufficient conditions for friendship?
  108. What do we get out of philosophy?
  110. What is beauty?
  111. What is insight?
  112. What is intelligence?
  113. What is philosophy these days?
  114. What is the meaning of courage and cowardice today?
  115. What is wrong with evil?
  116. What makes a being human?
  117. What’s free about your will?

18 February 2007



Those who are familiar with trains in England will remember that some trains and the underground use (or used) three rails instead of the normal two. Not surprisingly, this third rail was called “the third rail.”

Unlike on most other railways, this third rail supplied the electricity to the train, instead of the overhead power cables which are more common these days. There is a lot to recommend the third-rail system, the most important point being that the countryside is not defaced with those ridiculous hanging cables, out-of-place pylons and other contraptions. The downside is that these rails are dangerous if one happens to step on them; dangerous in the sense that having lunch with sharks can be dangerous. But why someone would want to have lunch with sharks is too big a question to answer now.

In dealing with madness (and its synonyms) we have adopted the equivalent of a third-rail system. On the one hand we have madness as a medical condition; on the other, madness (insanity) arises in a legal context, although insanity as a defence in court is not that common. These are the two contexts in which we commonly find madness as the object of our attention.

Philosophy is the third rail. And as long as medicine and the law don’t inadvertently step onto this third rail, everything should be fine. To give you a taste of what I mean, I will consider two questions. For a long time madness in medicine has been the domain of psychiatry and perhaps psychology, always trying to reach that gold standard of normality; in other words, to help troubled patients achieve a normal mind. However, with the advent and progress of neurology, with its tools of MRI, fMRI, genetics and so on, the debate has shifted from the normal mind to the normal brain. In other words, should we be talking about normal minds or normal brains? And although the neurologist is clear about his or her mission statement, and the psychiatrist about his or hers, the answer to the question “normal mind or normal brain?” has deep and serious implications.

Take the legal situation. Is the law concerned with administering punishment or with social engineering? For example, is the law there to punish people who transgress it, or to remove people who transgress it from society? In other words, is the law concerned with justice or with social engineering?

If we want to punish people, then we have to exclude mitigating circumstances and administer the punishment. Hence, whether someone was abused when they were young or lives in poverty is just immaterial. However, if we want social engineering, then it won’t be the jury and the courts who decide what happens to people who transgress the law, but maybe a neurologist, who is in a much better position to tell us what a normal brain is, and by implication what reasonable behaviour is.

Of course, when I say normal brain, normal mind or even social engineering, I am not necessarily thinking about the state of affairs as it is now. Maybe the question of what a normal brain is, is as difficult to answer as what a normal mind is. But we, as philosophers, are free to look further afield than the here and now. We can, for example, understand the normal-brain idea as analogous to Plato’s forms: some ideal standard, a gold standard if you like, of what a normal brain is. Maybe one day neurologists will be able to perform some sort of benchmarking for human brains in the way that companies today benchmark all sorts of gizmos.

My point is that once we touch that third rail of philosophy, most probably someone is going to get hurt. The question “is it a normal mind or a normal brain?” is a philosophical question (a value judgement, even, about science), but the outcome of this question is very important. For example, if we decide that it is a normal brain, then research, perhaps drug research, would focus on changing the composition of the brain and not on changing the behaviour of the patient. Similarly for the legal system: the debate between punishment and social engineering is a philosophical question, and if we opt for social engineering then we can expect to see the law moving towards administering justice through a test tube.

There is a good essay on mental illness in the Stanford Encyclopedia of Philosophy, written by Christian Perring. I won’t go into too much detail about this article except to identify some interesting philosophical issues, which belong perhaps more to the philosophy of science and medicine.

An important debate which this essay identifies is whether there is such a thing as mental illness and, as a consequence, whether psychiatry is a real science (questions asked by Szasz). Although the essay points out that psychiatry is a respectable science these days, the issue has not gone away. Is disease, by definition, a bodily disease? In that case we lose the mental part of the disease.

But there still remain the issues of what is normal and what is a disease. What is normal is a difficult question to answer for psychiatry, neurology and philosophy. In my opinion the problem lies in the fact that we can easily decide whether someone is behaving normally or abnormally; evolution has made sure we can answer this question, at least at face value. But when we come to establish an a priori definition of “normal” we come across all sorts of problems.

Maybe it is relatively easy to establish abnormality if we can establish an abnormal brain or physical damage to the brain. But what about behaviour which is the result of trauma, fear or aggression by others? What is the disease: the irregular patterns in an MRI, or the bully who is terrorising the neighbourhood? And what should we treat: the bully or the brain? Maybe this might prove easy to answer.

What about cultural differences? Is it “normal” for a patient to object if their doctor is a male or a female? In our culture a doctor is a doctor, and whether they are male or female is just immaterial. But that’s our culture. So even if there is an electron-microscope answer to what is normal, there might not be a similar answer once culture plays a relevant role. You will have noticed that we are that close to touching the dangerous third rail. We touch it as soon as we ask whose culture we should follow: ours, which says that the sex of the doctor is immaterial, or some other culture which says that it does matter? Maybe we haven’t been electrocuted yet, but now consider this question: should female foeticide be allowed simply because in a given culture females have no value at all? Is it madness or normal for a mother to abort a foetus simply because females are considered as having no real value?

However, today we know that hereditary lineage is preserved through the maternal mitochondria. Can we go from here and say that, at least from a genetic point of view, a female who destroys a female foetus is in effect “genetically” mad? She is destroying her genetic lineage. And can we imply that societies which condone female foeticide (or similar anti-female behaviour) are in effect mad and crazy because they are condoning the destruction of their lineage? What we know for sure is that whichever way we answer these questions, we are going to get philosophically electrocuted.

Although in reality medical madness and legal madness are poles apart, they do come together over certain issues, for example personal responsibility. Excluding addiction and other abuses, how can we hold someone responsible for their actions if their brain falls far short of the gold standard? And if responsibility has now become a matter for the electron microscope to establish, have psychiatry and neurology destroyed morality? Maybe not: our evolutionary instinct has no problem deciding who is a normal person and who looks suspicious, even if sometimes we get it wrong. And if we can get that far, we can also arrive at a semblance of a functional moral system, sans electron microscope.

But this dilemma of moral responsibility is a central issue not only for medicine but also for the law. The law (Wikipedia: insanity defence) recognises that sometimes the mental state of the person on trial is a relevant issue. And although mental illness and insanity differ from jurisdiction to jurisdiction, the idea is relatively clear: “punishment is only reasonable if the defendant is capable of distinguishing right and wrong.” But using insanity or mental illness as a defence can very well be a double-edged sword. In England, for example, if a defence of insanity succeeds, the courts can still issue a hospital order against the defendant with no time limit. Hence, although the defendant might not go to prison, they might still end up in an institution for much longer than had they gone to prison. For this reason, insanity defences are not very common. (The M’Naghten Rules (1843) are the main test of insanity in most common-law jurisdictions.)

The legal position on madness (insanity, mental illness) seems to centre on the fact that justice must be seen to be done. Hence, someone might indeed have a mental illness or be insane, but they will nevertheless still go through the trial and punishment process; for example, Michigan was the first state to establish the verdict Guilty but Mentally Ill (1975). The legal process takes precedence.

In a short article, “Mad, Bad or Ill,” published on the Wellcome Trust website (http://www.wellcome.ac.uk/doc_WTX024068.html), it is reported that the philosopher Professor Glover (with others) is conducting research amongst inmates of Broadmoor Special Hospital (a hospital in the UK which houses people who have been institutionalised by the courts) who suffer from Antisocial Personality Disorder. Glover is trying to find out what the moral beliefs and moral reasoning of these patients, who would otherwise be considered criminals, actually are. For example, his interviews include such questions as: “the only space in the car park is the disabled parking space, do you use it?”

However, in the words of Glover himself, these interviews are proving difficult because “[the patients] are too demented to take part or put up a front.” That this sort of field research should prove difficult does not come as a surprise. The problem, and it is an old problem, is that it is one thing to ask people what they think they would do in a given situation, and quite another to know what they would actually do should they find themselves in such a situation. In reality we don’t know what people will do when faced with a serious moral dilemma. One reason, of course, is that most of us do not have to face serious moral dilemmas on a regular basis, and hardly ever the same dilemma anyway.

Hence, the real problem for Glover is that he is facing a sort of mirror image of the is/ought problem. You will remember that Hume drew our attention to the incongruity of jumping from what is (the past) to what ought to be (the future). In Glover’s case he is trying to infer what will be (what someone will actually do in a real situation) from what ought to be done (what someone believes they would do). If the inference does not work from is to ought, why should the world be more predictable from ought to is? We can raise the same objection to those psychology studies that purport to arrive at moral truths by subjecting a few undergraduates to experiments in the comfort of a university laboratory. Maybe following combat troops in action with a clipboard would produce better and more realistic results about what a moral standard is.

I use the Glover research, which is no doubt very useful and from which good things will probably come, to show that the third rail, philosophy, is just as dangerous to jurisprudence as it is to medicine; that is, when the law tries to find what is morally normal. However, although the meaning and use of madness are very different in the legal context and the medical context, there are parallels, especially in the issues and consequences; the references I give above discuss these similarities and differences in detail. What I am interested in is that both disciplines are concerned with responsibility. That is, if someone is suffering from mental illness, are they responsible for their actions? For the legal profession the objective seems to be to protect the notion that we act with free will, or as if we act with free will. And in a way, the medical profession aims to restore this idea of free will, or at least to restore the patient to acting as if with free will.

However, as I pointed out, the future consequences might not turn out to be so clear-cut and friendly to all parties. The more we move away from mental illness towards brain disease, the less scope we give to the legal process. And although the courts might always be involved with punishment, they would certainly have to share their powers of social engineering with medical experts. In other words, if the courts tried to investigate the normal brain, they would be acting as neurologists and not as lawyers.

Now, whether we use the M’Naghten Rules, MRI or our evolutionary instinct, we would all agree (philosophers ought to be cautious about such agreements) that mad is bad and negative. It is therefore no wonder that both medicine and the law are very much concerned with establishing what is normal by identifying the abnormal. But is this enough?

The reason why we shouldn’t agree to any general statements is that we haven’t yet investigated what “normal” people think about madness. One thing we can do is look at how we use the word mad (crazy) in everyday language, where mad has (mainly) four meanings:

1) unusual, unconventional, risky;

2) angry;

3) hurried, fast;

4) a lot, a huge amount.



For example:

1) He must be mad to leave his city job to go and observe fruit flies mate.

2) He was hopping mad when the fruit flies were blown away by the wind.

3) He rushed away in a mad run trying to follow the fruit flies in the wind.

4) Clearly, he is mad about fruit flies.



In our “normal” language, “mad” represents common emotions and feelings, but at a high or extreme intensity. And this raises an important issue for philosophy: is an emotional expression the same as a behavioural expression? Maybe the issue of madness is not so much one of behaviour as one of how we control our emotions. I submit that a rational and normal agent has a high degree of ability to control their emotions. Of course, there are always limits. Having lunch with sharks is quite unreasonable and irrational even if one is not afraid, in the same way that it is quite reasonable and rational to be angry if one’s fruit flies are blown away.

Take care


18 February 2007

11 February 2007

the impact of music on us

[this essay could do with a second check, but I don’t have the time]


In 1913, Igor Stravinsky's The Rite of Spring had its premiere in Paris. Three minutes into the ballet, the audience could take it no longer and started to riot. More about the effects of music later on.

I won't be misrepresenting reality by saying that our general perception of music is one of pop groups, top twenties, CDs and mp3s, sound that accompanies films and adverts, and of course that annoying noise played in bars and restaurants. Some would go a bit further and include classical music, concerts and musicals, and for the very few, playing a musical instrument, which usually involves reproducing the works of some composer or other.

However, most would agree that music is an important art form, and some equate listening to music with a spiritual experience. There is no doubt that music can have such an effect on people; it is supposed to. And there is no doubt that music is important: it is found in all human societies, dates back millennia, and is also present in the animal kingdom.

However, what I want to look at is music as a human activity, not as an artistic or aesthetic sound.

So, taking the aesthetic factor away from music, we are left with the physical content: basically, sound. Some scientists describe sound as a kind of touching, or touching at a distance. Physiologically, sound vibrates the small bones of the middle ear, which in turn set in motion the fluid of the inner ear. The moving fluid bends the hair cells, which generate electrical signals that travel to the brain, where they are interpreted as meaningful sound.

For this essay I will rely on two main references. The first is "Musical Language", a programme aired on 21 April 2006 on Radio Lab, WNYC Public Radio (RadioLab), also available as an mp3 download; it is a good introduction to music as a physical phenomenon as explained by science. The other is the 2006 Reith Lectures, given by Daniel Barenboim (Barenboim); you can listen to and read the five lectures on the BBC website. All details below.

Barenboim laments the fact that we live in a visual society, where sound plays a secondary role in our lives. This is unfortunate because, as he points out, we can start hearing seven and a half months before we are born. In other words, hearing as a human sense has a seven-and-a-half-month head start over sight, yet from the minute we are born everything is dedicated to sight.

If we take the fact that hearing is a very basic biological function, together with the idea that hearing has the same status as touching, we can immediately see some serious implications. These concern, for example, noise pollution, perhaps in the same sense that information pollution affects our sight; at least partially, anyway.

So if we take hearing as touching, the most intimate of senses, then we can only conclude that noise pollution is a form of physical assault; maybe not in the legal sense of assault, but certainly in a moral sense. Does this mean that noise pollution should have a status similar to that of groping someone? On Radio Lab, some of the scientists interviewed describe sound as touch at a distance. What is of interest to us is how to interpret this idea of touch at a distance.

Should this idea of sound as touch be established as a valid principle, the consequences could be far reaching. Barenboim's point about living in a visual society is quite telling. Most societies have stringent rules and laws about what can be exhibited in public: nudity is taboo, graffiti is regarded as criminal damage, and public images are always of perfect situations, perfect people, perfect everything. In the world of sight we regard images as having a status that needs regulating and managing. Images even have a moral content: holy pictures and pictures of religious figures may or may not be offensive, and so on. Pornography, for example, is today almost always thought of as images, never as written language, which can of course be equally graphic. But can we speak of music (not songs) as being pornographic or offensive? And what is more offensive than someone touching us without our consent? Or can there be a piece of music that is blasphemous? Mind you, not the title of the piece, and certainly without any lyrics; just the sound of musical instruments. Can that be blasphemous?

But when it comes to music in public spaces there is no respect for, or thought about, what we are subjected to hearing. Barenboim described himself as having "suffered tremendously" from having to listen to muzak in lifts. But besides being offensive, muzak is also an arrogant form of expression: apart from being made to listen to something we might not want to hear, there is the arrogance of others assuming they know what we like and enjoy.

We also know that music can be used literally to manipulate the behaviour of others. Supermarkets have used music (see the note below about a possible reference) to influence the shopping habits of customers: playing French music, for example, makes customers buy French wine, and even the smell of freshly baked bread makes customers buy bread. Music is also used to psych up soldiers in battle, and not just in films.

However, we use music even more effectively than muzak or marching bands: we employ it in our language when we communicate with others. A majority of languages are tone languages (Tonal language: Wikipedia), in which a high or low pitch pattern is permanently associated with a word to give it semantic meaning. Some Chinese languages are tone languages, which is why they are difficult for non-native speakers. A language like English depends more on intonation, with rising or falling pitch usually at the end of a sentence or word.

We effectively use music to influence others with our spoken language; not that it's any different for written language. Notice the sound of the language we use when we ask someone for a favour, and compare it with when we are angry with someone. Children scream because they know we cannot stand that sound for long; maybe in the same way the audience in 1913 couldn't stand the music. We can even go so far as to say that romantic language is also romantic music. Prof Trummel, on RadioLab, says that when we speak we sing. Indeed, monotonic speech is not how humans talk.

Of course we are more or less masters of our mother tongue, but what implications does the "music" of a language have for those who are not native speakers? Prof Diana Deutsch, on RadioLab, studies the music of languages, and in one experiment she assessed the pitch of a group of Chinese and American children. Her results showed that 74% of the Chinese children had perfect pitch, while only 14% of the American children did. Chinese, unlike English, is a tonal language, and she has a hunch that this disparity is due to language.

Even if we take this only as a working hypothesis and accept that language can influence what we hear and how we hear it, two questions immediately arise. The first: is it really possible to learn a second language to a native speaker's standard? This is not to say we cannot communicate fully in the second language without any relevant discrepancies; the issue is what subtleties we miss because of these fundamental differences between languages, and what it would really take to learn a second language to that standard. The second: if we cannot hear a second language in its full glory, what are the repercussions, for example in politics or business? Do people think we are silly because we cannot speak their language as they do, and so believe they can disrespect us? Would a misplaced intonation be regarded as an intentional insult?

We know, however, that these are not just academic questions. This is what Barenboim has to say: "The sound [music], the German, the so-called German sound in many ways is less harsh at the beginning of the note. Probably - and this again is very subjective - probably due also, not only but due also to the fact that the German language has such heavy consonants." And this idea that language can affect the sound of the music originating in a culture or nation has direct implications for the music itself. Consider this quotation: "I have yet to find a German musician who feels the same degree of closeness to La Mer of Debussy as he does to the fifth symphony of Beethoven. And in the opposite direction as well. For fifteen years I was conductor of the Paris orchestra, and believe me it was very difficult to get the French musicians to feel the kind of, not only enthusiasm but atavistic attachment to the fifth symphony of Beethoven which they did perfectly naturally with La Mer."

You might object that all this is speculation and hunches, as Deutsch herself called her perfect-pitch-and-language connection. But we do know from Dawkins (The Extended Phenotype), for example, that the most effective and efficient way for a male cricket to influence a female cricket is to sing to her. Music works for crickets, and music works for humans as well.

Going back to Stravinsky and The Rite of Spring, Jonah Lehrer, on RadioLab, tries to explain what happened by appealing to brain chemistry. It is believed that there is a group of neurons whose sole job is to find patterns in any new noise or sound we hear. When these neurons repeatedly fail to find a pattern they start releasing dopamine into the brain, which in small doses makes us high and euphoric, but which in large quantities makes us go crazy. This, it is argued, is what happened in 1913: the dissonant, harsh sound of the Rite simply made people go crazy and riot. A year later people were prepared, and Stravinsky was hailed as a hero.

This episode, together with the idea of touching at a distance, is important for us, because in principle music can be subversive, with literally physical consequences for people and society in general. But we know this already: the Beatles started a social and cultural revolution, and more recently heavy metal and the like have become popular with fringe groups in society. Religions understand this idea very clearly and use music to good effect to control people.

This is what Barenboim has to say on the topic of subversion: "In times of totalitarian or autocratic rule, music, indeed culture in general, is often the only avenue of independent thought. It is the only way people can meet as equals, and exchange ideas. Culture then becomes primarily the voice of the oppressed, and it takes over from politics as a driving force for change."

I would agree with Barenboim up to a point. The problem is that Barenboim does not reconcile this “voice of the oppressed” with the normal way people experience music.

In the following passage, which appears a few lines after the quote above, he quite rightly identifies the task of the musician: "Now, when you play music, whether you play chamber music or you play in an orchestra, you have to do two very important things and do them simultaneously. You have to be able to express yourself, otherwise you are not contributing to the musical experience, but at the same time it is imperative that you listen to the other. You have to understand what the other is doing."

This is the problem: most people, in fact the vast majority of people, only experience music as observers, never as creators. The vast majority never get to be involved in "expressing" themselves. Music for them is the voice of the oppressed only in so far as the oppressed listen attentively, never as the expression of a personal contribution to that voice.

This, I propose, is the fundamental philosophical issue about music. Why is music not something we are brought up to create as individuals? In simple language, why aren't we given a musical instrument when we are young and told to get on with it, in the same way, maybe, that we are given pencils or crayons and told to get on with it? And why is it that when it comes to music we have practically no choice but to reproduce other people's work? We do not do this with painting and literature. In those cases we strive to be original, to create our own work and to express ourselves. No one has achieved great heights in literature by simply copying the works of Shakespeare, nor become a master by simply copying the works of Rembrandt. But this is what we do with music. We reproduce, mainly, what others have created; hardly ever what we create.

Let’s get rid of the practical problems first. As every infant will tell you, parents, and mothers in particular, are not exactly fond of brat-created dissonant sound; we don't have the English expression "children should be seen and not heard" for nothing. It's bad enough as it is; having a 1913 every few hours is not conducive to the well-being of parents and society in general.

Musical instruments cost money, to say nothing of the fact that they are also very delicate. A violin was never made to be plucked by the destructive hands of the average five year old.

Another practical issue is that exciting music is usually a group effort, as Barenboim quite rightly points out. A single fiddle playing alone is not the same as a violin playing in an orchestra.

All these practical objections militate against music being created by individuals exploring their own sense of expression. They push us to perform music in groups and to repeat works we already know.

But they are problems only because we don’t have practical solutions. What if we had playrooms at home where we could play our musical instruments to our heart’s content, or community places to play instruments in the same way we have community gyms and sports facilities? Instead of giving a hi-fi system as a present, why not give a musical instrument? In fact, why not build houses and flats soundproofed against external sources?

If we can solve these practical issues, what are we left with? The most important thing is the opportunity to express ourselves through one of the most primeval instincts we have inherited. But being brought up to express ourselves freely is not conducive to a micromanaged political and social structure. Having the guts to say "I want to play this" or "I feel like expressing myself in such a way" is not in line with manipulating others for our survival. Expecting people to think freely, feel freely and express themselves freely is too subversive an idea for the 21st century. Yet it seems that music can give us all these qualities.

In other words, expressing oneself, cooperating with others and listening attentively are all personal activities that challenge the status quo. I am sure we can all see the difference between creating music as an individual and listening to music as a passive observer; between expression and passivity. But what interests us as philosophers is: which parents are prepared to challenge and risk the status quo and allow their six-year-old to have a great time banging away at the pots and pans? Who is going to allow us to express ourselves as we wish?

Take care


11 February 2007

Radio Lab, "Musical Language", Show #202, WNYC Public Radio, 21 April 2006.


Daniel Barenboim, BBC Reith Lectures 2006: "In the Beginning was Sound".


Ronald E. Milliman, "Using Background Music to Affect the Behavior of Supermarket Shoppers", Journal of Marketing, Vol. 46, No. 3 (Summer 1982), pp. 86-91.


This is the reference on music in supermarkets and shopping habits. Unfortunately, I only have access to the abstract, not the full document.