Things to Do with Moonlight

By Allen Curnow

I
Holy Week already and the moon
still gibbous, cutting it fine
for the full before Jesus rises,
and imaginably gold
and swollen in the humid heaven.

First, second and last quarters
dated and done with now,
the moon pulls a face, a profane
extemporisation,
gold gibbous and loose on the night.

Hot cross buns were never like this.
The paschal configurations
and prefigurations could never have
nailed the moon down
to the bloody triangle on the hill.

By the spillage of light the sea told
the cliff precisely where to mark
the smallest hour when I woke
and went out to piss
thankfully, and thought of Descartes,

most thoughtful and doubtful pisser,
who between that humid light
and dark of his mind discerned
nothing but his thoughts
e. & o.e. as credible, and himself

because he had thought them, his body
had a soul, his soul had a body
an altogether different matter,
and that made two of him
very singularly plural, ergo

sum couldn’t be sumus. He thought
deeply and came up with a solution
of blood in spirit, holy adhesive,
God, singular sum
best bond for body and soul.

II
And the height of the night being humid,
thickened with autumn starlight
to the needed density and the sea
grumbling in the west,
something visceral took shape of an idea,

a numen, a psyche, a soul,
a self, a cogitation squirmed
squirmed, somebody standing there
broke wind like a man
whose mind was on other things.

His back to me and black
against the gibbous gold
of the godless moon, still blinking
the liturgical full,
something stuck its ground like a man

in a posture of pissing out of doors,
thankfully by moonlight, thinking
of pissing, experiencing the pleasure
and the pleasure of thinking
of pissing, hearing also the sea’s

habitual grumble. Descartes?
I queried, knowing perfectly well it was.
And he to me, Your Karekare doppelgänger
travesties me no worse
than the bodily tissue I sloughed in Stockholm–

no wonder I caught my death
teaching snow queen Christine,
surely as her midnights outglittered
my sharpest certainties
an icicle must pierce my lungs

(at five one midwinter morning,
the hour she appointed for philosophy
by frozen sea, freezing porches)
and my zeroed extension
wait there for the awful joyful thaw.

There’s the customary stone I’m sure,
with customary lie incised,
the truth being I exist here thinking,
this mild March night.
As for the thought, you’re welcome.

III
No less true it was I, meaning me,
not he that was physically present
pissing, and metaphysically
minding the sepulchre
not to be opened till after the full moon.

Cogito. I borrowed his knife
to cut my throat and thoughtfully
saw the blood soaking the singular
gold humid night.
Ergo sum. Having relieved myself

of that small matter on my mind,
I leaned lighter on my pillow
for a gibbous moon, a philosopher’s
finger on his cock,
and a comfortable grumble of the sea.

(From: Allen Curnow (1979) An Incorrigible Music: A sequence of poems. Auckland University Press.)

This poem is the place I first encountered the word ‘gibbous’. It comes up so often in the poem I had to go look it up.

Rereading the poem now, I notice that there’s an Easter element to it that I had completely blanked out. My interest in it has always been in the reference to Descartes and the Cartesian thesis that we are essentially thinking beings and that we know the mind with more certainty than we know the body. The poem is a little childish really. It’s easy to ridicule someone by picturing them going to the toilet. It is perhaps a form of ad hominem, suggesting that if Descartes had been elderly and more taken up with the physical necessities of going to the toilet, he might not have postulated a fundamental and unbridgeable gulf between the mental and the physical sides of a person.

It is completely question-begging, of course. If Descartes is right, and we are disembodied minds, it would be entirely possible for the mind to undergo a series of experiences exactly as if of getting up in the middle of the night to go outside and piss, so the fact that Curnow experiences that does nothing to show the metaphysical picture developed by Descartes is false. But I can’t help agreeing with Curnow that, psychologically speaking, he probably wouldn’t have had the thoughts he had, if his body had been more unreliable. Women have messy bodies. Could a woman have given us the Cartesian meditations? (Of course, the idea is pretty improbable to begin with. The chance of anyone coming up with it is pretty hard to define but surely vanishingly small. So it probably doesn’t make it substantially less likely to have come from a woman.)


Is it good to talk?

Dot writes: on Saturday night, at a party at our neighbours’ house, I found myself having a thought-provoking conversation with a total stranger about her miscarriages. She was a fairly elderly lady and volunteered the topic quite freely. We were talking about ‘trying for a girl’ and the high chance of having more of the same. I mentioned that the two little boys romping about the house were mine, and she pointed to two middle-aged men as hers and said she had three boys but would have had seven children if she hadn’t lost four in pregnancy. One, apparently, was quite far along and would have been a little girl. In those days, she said, one didn’t particularly talk about miscarriage; one just got on with it. I asked if that made it harder or easier. Easier, she said.

I found this response simultaneously surprising and comprehensible. On the one hand, it is very much the standard view at present that one should be able to talk about distressing events. Pain expressed is pain relieved; concealment is associated with shame. One should not be alone with grief. On the other hand, talking about events like miscarriage brings them under the purview of social expectation. I’ve recently read Arlie Russell Hochschild’s The Managed Heart, which makes the point that in many circumstances we feel there are certain emotions we ought to have and to express but we often find ourselves failing to measure up. Thus we often try to generate the right emotions to fit the circumstances, especially when others are watching. Perhaps if you don’t talk about pain or grief you also don’t have to do it right: you don’t have to deal with external expectations about how you should be and behave. We have a much more elaborated discourse about experiences like miscarriage now – this particular event is much more visible and discussed than it was a generation ago – and while that should promote compassion and consideration towards others who suffer such misfortune, which is a good thing, it must also produce a certain pressure on the sufferers.

Another dimension that occurs to me has to do more with the cognitive view of emotion that I’ve been reading about in Martha Nussbaum’s Upheavals of Thought: The Intelligence of Emotions. Nussbaum argues that emotions are essentially judgements of value. This position starts to make a lot of sense as she elaborates it. Judgements are not necessarily reducible to linguistically formulable propositions (Nussbaum’s position rests on a broad view of cognition not confined to the cool and explicit: non-linguistic animals and tiny children are more than capable of it); the enormous urgency of some emotions is explicable in that they are ‘eudaimonistic’, which is to say that they are centred on what we feel is necessary to our own flourishing. Emotions enable us to perceive the salience of events to our selves and to relate to the world. They sometimes seem illogical, or to come upon us against our conscious judgements, but this is explicable in terms of the early formation of our emotional lives in childhood: the judgements embedded in our emotions are not always the ones the adult voice in our head would express. Emotions thus have an unconscious or subterranean dimension that can seem out of our control.

Nonetheless, Nussbaum’s view can intersect with Arlie Russell Hochschild’s, in that the ‘felt’ judgements of the heart can clearly interact with the views of objects and events we consciously adopt. One can thus see how it is possible to learn to alter one’s emotional value-judgements top-down, by thinking the thoughts that would translate into a particular feeling. I think this is also relevant to the question of the ‘talking cure’. On the one hand, pain can be relieved by being spoken of. This is a way of bringing it under control and also of separating oneself from it; what has been spoken can be contemplated: it is no longer of the core of one’s self. But on the other hand, supposing the internal value judgement were not quite as one might expect to formulate it, the work of formulating it might create an emotion that was not wholly there before, or bring forth a dissonance that was itself a source of pain. And insofar as depressed or sorrowful states are characterised by recurring conscious negative thought, it may not always be so helpful to encourage them by talking about them.

funny old fruit

Ken writes:

Did you know that if you take an apple and plant the pips you’ll get a tree, but not one of the same variety as you planted? You won’t get a Granny Smith or a Braeburn or whatever. You’ll get a new variety unique to your tree, and it’s apparently very unlikely to be any good at all. It will also grow into a massive tree. Apple trees of recognised varieties that you buy from the nursery are grafted onto different rootstocks that restrict to a greater or lesser degree how high the tree will go.

If you buy a tree from a nursery, what you get will be a chimera. The sortal part will have come from a cutting, and the rootstock, I presume, likewise (only cut from a root?).

Isn’t this a curious state of affairs? It makes you wonder what sort of thing a variety of apples is. Philosophers differ, of course, but I’ve always been attracted to the Platonist conception of sorts or types of things, which sees them as abstract or higher-order objects that stand over the various individual objects or entities of that kind. There are many individual Mini Cooper cars, for example, and then there is also the type MINI COOPER, the abstract thing whose form all those individual cars share. I would have said the same about apples, but I don’t think that’s quite right.
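Before saying why, a loose analogy for any programmers reading (my own analogy, and only an analogy): the Platonist’s type stands to its tokens roughly as a class stands to its instances.

```python
class MiniCooper:
    """The type: a single abstract specification that many cars share."""
    def __init__(self, colour):
        self.colour = colour

# The tokens: individual cars, each an instance of the one type.
car_a = MiniCooper("red")
car_b = MiniCooper("british racing green")

print(type(car_a) is type(car_b))  # True: one shared type
print(car_a is car_b)              # False: two distinct individuals
```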

Suppose someone cloned me. Then there could in principle be an army of men with my genetic make-up. These men would have a common form, and be of the same sort or type. Even without the clones, there is nothing to stop us thinking of Ken the man and KEN the abstract form (the essence of Ken); only given there are no clones, this is a type that has only one instance.

How, though, is the original Granny Smith tree related to the thousands of Granny Smith trees in commercial orchards around the world? It’s not as mother to daughter, because daughter apples would all be different varieties (a tree grown from a pip is not at all like the apple it came from). Rather, the commercial orchards came from cuttings (with the same genetic make-up) and from cuttings of cuttings and so on. Are these clones? I don’t think so. Clones are related to their original like children to a single parent. This is more a case of, say, chopping a hand off and the hand growing into an identical person (attached to a nutrient supply through the rootstock).

It seems to me there’s at least as much reason as not to regard the variety, not any individual tree, but the variety, as a strange sort of individual object with discontiguous parts spread out through time and space. Most things have parts that are attached to them, but some, like the United States, have geographically separated parts. An apple variety may be like the USA, but with parts that are separated in time as well as space. The United States may seem like a strange sort of object, to be sure, but it’s not an abstract object in the same sense as the essence of Ken or MINI COOPER is. Abstract objects, at least on the Platonist conception, cannot perish and don’t have a location in time and space. (An individual car may be parked outside my house, but the form of the car isn’t there: it’s wherever there are cars.) Individual apple trees like the ones I planted today may perhaps be parts of a larger individual.

Why should we see it this way? Well, go back to the original Granny Smith tree. Suppose we take a cutting off it. Now we have two genetically identical pieces of tree material (albeit one of them is bigger and still rooted to the ground). By what right could we say that the rooted tree is the same tree and the cut branch is not the same tree anymore? (Well, we can call it what we like, but how does the nature of the case determine what is the same tree and what is not?) Isn’t it arbitrary? Does size matter? Does having roots matter? The cutting can grow its own roots if we put it in water. Taking the cutting may kill the branch, but it may likewise kill the tree, or both. The leaves on the end of any of the branches haven’t changed. They don’t care where the nutrients come from as long as they keep coming up the line.

We are dealing with a case of fission: Object A splits into B and C. If the nature of the case doesn’t give us a decisive reason for exclusively identifying either B or C with A, then either A perished, or both B and C are (detached proper parts of) A. It would be absurd for size to be the determining factor of identity. And whether roots are present or not is relevant only to the future survival of the part, not its present identity. (Being instantaneously deprived of its roots is an existential crisis of the first order for the plant, but why should it affect what plant it is?)

Perhaps one reason to favour the part/whole or complex-individual conception of apple varieties would be to think how we’d describe it if all the examples of one variety suddenly ceased to exist. Would we say that the variety ceased to exist too, which favours taking the variety as a whole composed of its detached parts, or would we say it existed all right, only it had no exemplars anymore, which favours the Platonist position? I think we’d say it became extinct, which is to say that it ceased to exist.

57

Ken writes:

I’ve just realised that, as well as being the number of our house, and the number of varieties of Heinz sauce, it is also central to the example at the heart of Kripke’s argument for scepticism about meaning and reference.* This last being what I spent my post-doc at University College Dublin investigating.

There are infinitely many numbers, but obviously no one has calculated with all of them. So there’s bound to be a sum involving numbers no one has ever actually used before. For concreteness, says Kripke, let’s suppose the sum is 57+68 = ? and that no one has calculated as high as 57 before. Most people, we assume, would answer ‘125’.

Next, he says, imagine a bizarre sceptic challenges you: “How can you be so sure that answering ‘125’ doesn’t represent a change in your previous usage? Perhaps as you used the terms in the past you should now answer ‘5’.”

The idea is that perhaps ‘+’ stood for a function that agrees with addition over precisely the range of calculations we did in the past, but differs regarding the answer required in the case at issue. It’s called a ‘bent’ rule because it is aligned with the rule of addition for all previously considered cases but kinks off, or diverges, from what addition requires for the case at hand. If ‘+’ stood for a bent rule, then using ‘+’ as we used it in the past would now require us to answer not ‘125’ but ‘5’.
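For concreteness, here is a bent rule spelled out as a little Python sketch of my own (Kripke himself calls the function ‘quus’ and writes it with a circled plus; the 57 cut-off follows his example):

```python
def plus(x, y):
    """Ordinary addition."""
    return x + y

def quus(x, y):
    """A 'bent' rule: agrees with addition whenever both arguments
    are below 57, but answers 5 in every other case."""
    if x < 57 and y < 57:
        return x + y
    return 5

# On every calculation anyone had (by hypothesis) performed before,
# the two functions agree...
assert plus(12, 34) == quus(12, 34) == 46
# ...but they diverge on the new case:
print(plus(57, 68))  # 125
print(quus(57, 68))  # 5
```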

Now, it’s clearly an absurd scenario. It simply strains credulity to think that we should answer ‘5’ if we’re to answer in accordance with how we used ‘+’ in the past. Why would we be following a bent rule? But, as Kripke says, if we’re not following a bent rule, there must be something about us that rules it out. Something about our past behaviour, or mental lives, or social and physical environment (e.g. a linguistic community), or our dispositions to answer related questions in the future: something must rule the bent possibility out.

But does anything rule it out? By hypothesis, no one thought about numbers as high as 57 before. They weren’t in our mind’s eye when we learned how to calculate, so how could what we learned have dictated one sort of response over others? Well, Kripke admits, we did learn a procedure for addition (e.g. count out x marbles; count out y marbles; put the piles together and count the combined pile), but the terms in that procedure could themselves have been bent, so that ‘count’ really stands for the practice of enumerating objects one after another until you get to 57 objects, after which the tally is always ‘5’. We may have learned a procedure for addition, but if that procedure involved bent rules, then a bent solution would be demanded for the question ‘57+68=?’.

Kripke goes through a number of possible candidates for a fact that might rule out bent interpretations, but he rejects them all. For instance, other speakers in our linguistic community can’t come to the rescue, because if there’s no fact of the matter about what an individual means by a word, there can be no fact of the matter about what an aggregation of individuals means.

The one promising candidate for a fact about an individual that would show they weren’t following a bent rule is the fact that most people are disposed to answer ‘125’, not ‘5’. This is where many philosophers think the sceptic’s challenge can be answered. But Kripke makes a very strong prima facie case that dispositions will not answer the sceptic. The first thing he notes is that it is not in fact true that, given any addition problem, most people are disposed to answer in accordance with the addition function. A lot of people make mistakes. It is very hard to manually add up a column of figures correctly. If you use a calculator, you are literally using a bent rule, because when the numbers get high enough, the calculator responds with ‘E’ (or ‘ERROR’ or some other output). The calculator follows addition some of the way and then diverges when the numbers are too large. And some numbers are so big that you would die before you finished hearing them read out. Humans simply don’t have dispositions to answer in accordance with addition (as opposed to some alternative bent function).
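The calculator point can be made vivid with a toy model (mine, not Kripke’s; the eight-digit display limit is an arbitrary stand-in):

```python
def calculator_add(x, y, display_limit=99_999_999):
    """A toy eight-digit calculator: it tracks addition until the
    result won't fit the display, then 'bends' to an error output."""
    result = x + y
    return result if result <= display_limit else "E"

print(calculator_add(57, 68))         # 125: agrees with addition here
print(calculator_add(99_999_999, 9))  # 'E': diverges where the display gives out
```

Extensionally, that is a bent rule: addition up to a point, something else thereafter.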

One strategy to overcome this, which I think fails, is to consider our dispositions as complex products of dispositions to answer in accordance with addition and dispositions to deviate from addition in certain conditions. For example, when tired we have a disposition to misread a column of figures and misalign numbers so as to throw the calculation off. If these dispositions to error can be factored out of our actual dispositions, then we can cite our disposition to add as the fact about us that shows we weren’t following the bent rule.

The trouble is that the factoring out has to be more than a merely nominal one. After all, our actual dispositions can in principle be factored into a disposition to follow a bent rule and auxiliary dispositions to err, etc., as well. We need to go further and say that whereas we only nominally have a disposition to follow the bent rule, we really have a disposition to follow a true one. I don’t believe we can do this. If there were some way to independently characterise the sort of dispositions bending our behaviour away from a true disposition to add, then we could say we really had that disposition to add. But the characterisation of auxiliary dispositions is always itself susceptible to sceptical reinterpretation and nominal factorisation into different bent dispositions. So the independent characterisation isn’t to be had.

The sceptical conclusion of the preceding train of thought is that there is no fact that shows I mean addition by ‘+’ (and should answer ‘125’). The argument generalises to all words, so the final conclusion is that there is no fact of the matter as to what we mean by any word.

I just think that’s a blast. It pleases me that I have a little reminder of this argument in my house number. I thought I’d share the argument with you to see if it stimulates any thoughts.

*Saul Kripke is an American philosopher, widely regarded as one of the best philosophers of the 20th century (in fact he’s still alive, so possibly the 21st century too). He presents the argument as an interpretation of Ludwig Wittgenstein’s work, but most philosophers don’t accept the attribution.

Some more thoughts on Behaviourism

Ken writes:

I’ve been trying for some time to write a post on Behaviourism because it’s something that holds a perennial fascination for me. I have a number of B. F. Skinner’s books on the go: About Behaviourism, Science and Human Behaviour, and Verbal Behavior. This last is the book Noam Chomsky made his name reviewing. It’s very heavy going, but it is brimming with interesting ideas (although a proper behaviourist would never put it in those terms). Anyway, I twisted my ankle this weekend and it can’t really bear my weight, so Alice has cleared her day at work to look after the kids so I can sit with my leg up and blog.

Most people have only a very vague and possibly caricatured notion of what Behaviourism is. I remember a joke about behaviourism doing the rounds when I was at university. What did one behaviourist say to the other after sex? “That was good for you. How was it for me?” Implying, among other things, that behaviourism has no account of first-person experience or introspective self-knowledge. In fact, Behaviourism doesn’t deny the reality of feelings and experiences, although it does challenge the traditional account of how we should understand them and their place in psychology.

Behaviourism seeks to understand an organism’s behaviour in terms of a history of differential reinforcement or punishment that shaped that behaviour. Behaviour assumes its specific form because in the past certain forms of behaviour were rewarded and certain other forms were punished. For example, a tennis player’s serve takes the form it does because tossing the ball so high and swinging so hard cleared the net and landed in the square in the past, whereas tossing it slightly differently and swinging slightly differently either didn’t clear the net or didn’t land in the square. Those movements of the body that resulted in a legal serve were selected by their good consequences for the player and reproduced in subsequent behaviour. It is the environment that determines whether the consequences are good for the player. If the rules of tennis were different, and a different trajectory of the ball were highly prized, a different combination of movements would be selected. This is a different sort of explanation from the one that goes ‘because that’s the way the player wants to hit it’, but, the behaviourist argues, it’s ultimately more satisfying.
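If it helps to see the shape of this style of explanation, here is a crude simulation of my own devising (illustrative only; nothing in Skinner corresponds to these particular numbers):

```python
import random

# Toy model of operant shaping: variants of the serve are emitted with
# probability proportional to their current strength, and the variant
# the environment reinforces is strengthened each time it occurs.
weights = {
    "toss high, swing hard": 1.0,   # this one clears the net and lands in
    "toss low, swing hard": 1.0,
    "toss high, swing soft": 1.0,
}
REINFORCED = "toss high, swing hard"

def emit(weights):
    """Emit one variant at random, weighted by its current strength."""
    r = random.uniform(0, sum(weights.values()))
    for action, w in weights.items():
        r -= w
        if r <= 0:
            return action
    return action  # guard against floating-point leftovers

for _ in range(500):
    action = emit(weights)
    if action == REINFORCED:
        weights[action] *= 1.05  # reinforcement raises the response's probability

print(weights)  # the reinforced variant has come to dominate
```

Notice that nothing in the loop consults the player’s beliefs or desires; the selecting is done entirely by the ‘environment’ (the reinforcement contingency).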

In the usual run of things, people explain something like the way a tennis player serves the ball in terms of the beliefs and thoughts and attitudes of the player. An explanation that appeals to the player’s intentions is a regress-stopper: it gives the explanatory process a stopping point. You can ask why the player chose to play the shot that way, but you don’t have to, because human agents are taken to be capable of spontaneous action (that is, genuinely initiated action, action without a previous cause).

If I understand him correctly, Skinner denies that human agents truly behave spontaneously. He thinks that all behaviour has a determinate cause (or, allowing for irreducibly probabilistic causal connections in atomic domains, that all behaviour is made highly probable by some antecedent event). Human actions are never the result of spontaneous (i.e. uncaused) decisions by human agents. So behaviourism is deterministic. Determinism is a hard doctrine to countenance, because it implies that human actions are never free. They are the ultimate consequences of chains of causal interactions beginning aeons and aeons ago in the distant past. If each link in the chain is causally sufficient to bring about the next, then our actions were already going to happen before we were even born.

I happen to think determinism is more plausible than the alternative, which holds that at some point there was a gap in the causal order that required the intervention of a non-physical causal agent to overcome. All my actions, whatever their wider meaning might be, are at one level the movements of the parts of my body (nerves and muscles and that sort of thing). It is incredible to me that these physical body parts become activated without some other physical cause.

Behaviourism implies determinism but not vice versa. You don’t have to be a behaviourist if you’re already prepared to be a determinist. Most philosophers and cognitive scientists would be determinists but not behaviourists. They think the common sense psychological explanation of human action in terms of beliefs and desires and that sort of thing is actually compatible with determinism because the mind is somehow just another piece of the still not very well understood physical causal story. Behaviourism is radical in seeking to reject the explanation of behaviour in terms of these familiar categories (of belief, desire and so on).

I think the strongest reason to take this sort of explanation seriously comes from the parallel between behaviourism and the theory of evolution. Both theories see current forms of phenomena as the result of historical processes of selection. Behaviour takes its current form because in the past behaviour it resembles was reinforced and dissimilar behaviour was not. The organism takes its current form because in the past organisms with that form survived while those without it did not. The mechanisms for shaping and selecting phenomena are different, but complementary. Natural selection works by a sort of winnowing out of unsuccessful forms, whereas behavioural selection increases the probability of reinforced forms. A certain trait has survival value, so subsequent generations of organisms that inherit that trait are created, and over time the trait spreads through the population as more and more of each generation have it. In behaviour modification, actions that were reinforced are simply repeated, which happens at the expense of untried forms that might otherwise have been emitted. Behaviour modification relies on the survival value of being disposed to find certain things rewarding (the attention of other people, smiles, physical affection, certain tastes and so on). These dispositions ensure that certain actions are rewarding, so they are repeated when similar circumstances recur.

Evolutionary explanation and the explanation of behaviour in terms of a history of reinforcement both do away with explanation in terms of the decisions of an agent. Evolutionary theory has rid us of the need to see the variety of species of animals as the result of the creative decisions of God, and behaviourism would rid us of the little god in the human mind for explaining human behaviour. (This reminds me of another joke about behaviourism. The American philosopher Sydney Morgenbesser, on having behaviourism explained to him, apparently said, ‘So what you’re saying is: Don’t anthropomorphize people!’ That’s astute. It’s absolutely spot on, but it’s not a criticism of behaviourism. Behaviourism is a rival to traditional belief-desire psychology.)

It can sometimes seem as if the behaviourist explanation presupposes the kind of thoughts and feelings and inner goings-on that the behaviourist officially disavows. For what is rewarding but positive feelings of satisfaction? Chocolate is rewarding because the boy likes the taste. Praise is rewarding because the boy wants to feel loved and valued. And so on. Skinner’s answer to this criticism is that the feelings, while genuine, aren’t needed to explain the mechanisms of the reinforcement of behaviour. He appeals instead to the survival value of dispositions to find certain things rewarding. Organisms with these dispositions were able to learn to respond to the environment in effective ways, and this conveyed a selective advantage (“About Behaviourism”, p.52). It is a corruption of the behaviourist explanation of things to say the agent is motivated to behave in such and such a way because they like the feeling the reward brings. That is smuggling a homunculus into the explanation that doesn’t need to be there.

It’s easy to see the principles of classical and operant conditioning at work in the utterances Frank makes. For example, sometimes he says ‘Good Boy’ to himself (he actually says ‘Bo Boy’, but that’s his version of ‘Good Boy’). A behaviouristic explanation of this might go something like this. He does things that we reinforce by saying ‘Good Boy’. He is disposed to find our warm smiles and attention rewarding, and purely by association he is conditioned to find ‘Good Boy’ itself rewarding. So it is slightly rewarding when he says it to himself. So he does. So there’s an explanation of how the behaviour came about and continues; but if we scale this sort of thing up to adults, we seem to lose hold of a distinction between learning and proficient stages. I mean, you can see how someone could acquire behaviours by a training process of reinforcement, but when does it end? Is the behaviour of a competent tennis player still under the control of the reinforcing environment? Does the professional still play the shots as they do for reward and reinforcement? I don’t know what the behaviourist would say about that, but I suspect that the answer would be that there is no genuine distinction between learning and proficient behaviour. Learning never ends. Proficient behaviour is highly effective at getting its reward: so effective it doesn’t feel like learning anymore, but it is still operating according to the same principles. Or maybe the distinction needs to be recast in different terms. Perhaps proficient behaviour is rewarded on a more intermittent schedule, and perhaps it is rewarded by other sorts of reinforcers than the ones it was trained up with.

In spite of the unsettling consequences, I think behaviourism must be true of human behaviour. For on the one hand, it is easy to see how it applies to some aspects of human behaviour, and on the other hand it’s hard to imagine circumstances that violate the rules of behaviour actually happening. If you get rewarded for doing something, you’re more likely to do it again in the future. Things that don’t pay, people tend to stop doing. But now what about the exceptions? Well, are there any? If I do something that is rewarding, e.g. well remunerated, what would interfere to make me unlikely to do it again in the future? The contingencies of reinforcement might change to make it no longer pay, or alternatively I could find some incompatible action more rewarding (for example, a woman who is a successful lawyer might leave her job to be a full-time mother because she finds it more rewarding). In general, it is hard to conceive of a case where someone finds something rewarding but doesn’t do it (unless they find an incompatible action even more rewarding; actions are ‘incompatible’ if and only if you cannot perform both). So what would make behaviourist principles stop working?

Uncharacteristically philosophical post from Dot

Dot writes: this is trespassing on Ken’s territory: possibly not philosophy, but getting there. (I’m not sure where philosophy starts but Ken has a strong sense of where its borders are and when people have failed to cross them. “That was quite an interesting paper,” he’ll say, “but it wasn’t philosophy.”)

I’ve been doing more reading on shame; also on anger, medieval vengeance and feud, but we’ll leave those aside for the moment. Some of the modern literature on shame leaves very little space for shame to have a positive function, but other work emphasises that shame is a pretty much inevitable part of emotional development and, as part of how we regulate our attachments to others, a necessary one.

Shame is a major aspect of the human condition. It serves a fundamental purpose, enabling human beings to monitor their own behavior in relation to others… Without both shame and laughter, complex social life would be impossible.
– Suzanne Retzinger, ‘Resentment and Laughter: Video Studies of the Shame-Rage Spiral’, in The Role of Shame in Symptom Formation, ed. Helen Block Lewis (Hillsdale, NJ, 1987), p. 178

Shame in this and other work influenced by Lewis is conceived as related to the ego-ideal and a failure to match up to it: thus shame means a focus on the self (“I am an inadequate person”) rather than, as in guilt, a focus on the act (“I did a bad thing”). So being ashamed and resolving to change would involve thinking something like “I need to stop being that sort of person” rather than “I need to stop doing x.” But it occurred to me to wonder: how could you stop being that sort of person except by stopping doing x? To what extent can you distinguish the person you are from the things you do?

I can conceive of someone arguing that who you are coincides precisely with what you do. But here’s a counter-example, involving something that’s shameful rather than wrong. You are ashamed of being fat and you want to stop being fat. Now, in order to stop being fat you need to stop eating cream cakes (and also start taking more exercise). But eating a cream cake, or even repeatedly eating cream cakes, is not in itself the same as being fat. Some very thin people pack a remarkable number of them away; similarly some unfortunate fat people haven’t touched them for years. There are also several different ways of tackling being fat: one would be drastic liposuction. It seems to me that this is an example where being a certain sort of person cannot be straightforwardly equated with doing certain sorts of things, and where shame at being (for example) fat cannot be mapped exactly onto guilt for (for example) eating cream cakes. The self and its actions are closely related but distinct.

(Another example: one can be a murderous psychopath while doing art therapy in Broadmoor.)

Ever-present metonymy?

Ken writes:

I think natural language semantics is a bit of a fool’s errand, at least as it is conceived and practiced throughout most of the world’s linguistics and philosophy departments. One encounters a myriad of little proofs of this daily if one keeps one’s eyes open.

Behold this bottle of squeezy pure clear honey.

I’m pretty certain that by ‘squeezy honey’ the manufacturer intends ‘honey in a squeezy bottle’. I am reasonably certain that by ‘pure honey’ and ‘clear honey’ they do not mean ‘honey in a pure bottle’ and ‘honey in a clear bottle’ respectively, but honey that is pure and clear. They don’t mean colourless by ‘clear’ but simply not turbid or cloudy.

No one would have any difficulty understanding the phrase ‘squeezy pure clear honey’, appearing as it does, in context, on the label of the bottle, to mean pure clear honey in a squeezy bottle. Everything is just as it says on the tin. But isn’t it interesting how the adjectives seem to modify the noun ever so slightly differently? It isn’t the honey that is squeezy but the bottle of honey, although it is the honey which is pure and clear.

It isn’t true in general that ‘squeezy X’ means ‘X in a squeezy bottle’, because that doesn’t work for X = ‘bottle’ (or ‘ball’ or ‘cushion’ or ‘toy’ or various other things).
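To put the failure of a uniform rule in mock-compositional terms (a toy sketch of my own):

```python
# A naive, context-independent rule for interpreting 'squeezy X':
def squeezy(x):
    return f"{x} in a squeezy bottle"

print(squeezy("honey"))   # 'honey in a squeezy bottle' -- right
print(squeezy("bottle"))  # 'bottle in a squeezy bottle' -- wrong: a squeezy
                          # bottle is itself squeezy, not inside one
```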

We seem here to be dealing with a case where the precise meaning of ‘squeezy pure clear honey’ is fixed by the context in which it is used. The mere fact that it is written on the label guides our interpretation. If we had encountered the phrase out of context, it might not have made any sense to us. This sort of thing, I think, makes it hard to credit the theories of meaning prevailing in philosophy and linguistics departments, which seek to associate general, context-independent rules with linguistic expressions.

Moral emotions

Dot writes: I’ve just read an article on the ‘hostility triad’ of moral emotions and their relationship to different types of moral codes (I’m having a research day – oh happy day…): Paul Rozin et al., ‘The CAD Triad Hypothesis: A Mapping Between Three Moral Emotions (Contempt, Anger, Disgust) and Three Moral Codes (Community, Autonomy, Divinity)’, Journal of Personality and Social Psychology 76:4 (1999), 574-86. I’m not quite sure how it’s going to contribute to the paper I’m working on, which is on shame and anger in saints’ lives, but I found it generally thought-provoking; also there were some great pictures:
[Link: pdf of the facial-expression images from the article (p. 579)]

The hypothesis tested is that the three emotions, contempt, anger and disgust, are typically elicited by offences against the three moral codes that conveniently alliterate with them: Community (‘in these cases an action is wrong because a person fails to carry out his or her duties within the community, or to the social hierarchy in the community’ p. 575), Autonomy (‘in these cases an action is wrong because it directly hurts another person, or infringes upon his/her rights or freedoms as an individual’ p. 575), and Divinity (‘In these cases a person disrespects the sacredness of God, or causes impurity or degradation to himself/herself, or to others’ p. 576). The codes can and do co-exist within societies (they were all derived from field-work in India) but they are emphasised to different extents. The research for the article was conducted in both the US and Japan. College students were asked to match scenarios, which had been selected to correspond to the three moral codes, with either facial expressions (one of the sets of expressions is reproduced in the pdf linked above) or with the terms ‘contempt’, ‘anger’ or ‘disgust’. In another experiment they were asked to assess which scenarios went with which code. The results very broadly supported the hypothesis, though there were a number of problems. In particular, when participants were asked to choose an appropriate reaction to the community-violations there was a modest trend towards the contempt face but fewer people chose the term ‘contempt’; there was also plenty of scope for people to assess the scenarios as belonging to violations of a different code from the one the researchers had in mind. Anyway, it was all broadly promising. Given that the texts I am working on are religious texts, I can perhaps bear in mind the idea of disgust and the Divinity code, since offences against God and the sacred are more than slightly important in saints’ lives.

The point that most struck me in the article was a basic one, made in the opening section. We tend to associate morality with rationality and oppose rationality to emotion, seeing children’s moral development as going hand-in-hand with their ability to ‘respect a kind of moral logic (e.g., “If I were in her position I would not like this, therefore I should not do this”)’ (p. 574). But

Authors in a variety of fields have begun to argue that emotions are themselves a kind of perception or rationality…; that emotions are embodied thoughts…; and that “beneath the extraordinary variety of surface behavior and consciously articulated ideals, there is a set of emotional states that form the bases for a limited number of universal moral categories that transcend time and locality” (Kagan, 1984, p. 118…) Cross-cultural work has begun to demonstrate that cognitive-developmental theories work less well outside of Western middle-class populations and that emotional reactions are often the best predictions of moral judgments… (p. 574)

My first thought here was that this reflects very interestingly on the old cliché of male rationality / female emotionality. There are other things going on too – I’m interested in the coupling of ‘Western’ to ‘middle-class’ here (and I think of the whipping up of emotion in the form of moral panic by the gutter press and how disgusting – ha! – I find it). It’s good to be reminded that emotion always has a cognitive dimension, too: our emotions arise at some level from our appraisals of situations, though maybe not consciously. Extremely interesting stuff…

Live from New York

Ken writes:

I’m in New York at the moment, mixing business with pleasure: meeting old friends from grad school and attending a philosophy conference at the Marriott Marquis in the middle of Times Square (I’m not actually staying on site; I’m staying at a friend’s friend’s apartment on the Upper West Side).

It is a rather bittersweet occasion. For along with the delights of the city and the joys of seeing friends again comes the realisation that it marks the ending of my career in philosophy. Academic jobs in philosophy have almost completely dried up. In the UK and Ireland, it is because governments have had to cut public sector spending on luxuries like education, and in the US, it is because US universities lost so much of their endowments when the stock markets and hedge funds collapsed in 2008. As an indication of how many PhDs are chasing how few available jobs, it is interesting to note that Boise State University received 588 applications for its recently advertised position (I know because I was one of the many unsuccessful applicants). BSU is not a prestigious institution, and a successful applicant could not expect to have access to an extensive library collection or to teach gifted and motivated students. (Point of comparison: UCD and TCD both had more than 150 applications for their latest posts, advertised in 2008. The market therefore worsened precipitously just as my fixed-term funded post-doc came to an end.)

I’m trying to be philosophical about it. I do find it easier to take when I reflect on the aspects of my life that I’ve done right, principally Dot and the boys. In my disappointment I detect two strains, one proper and one shameful. It is proper to be disappointed that I can’t get a permanent post in philosophy, given that I’ve invested a lot of myself in it almost continuously since 1993. It will be difficult to change direction, and I worry that I won’t be able to convert all that momentum into anything remunerative or equally enjoyable. I’ve stayed in philosophy and academia this long because I find it intensely stimulating, and now I will have to do without that source of intellectual excitement. The shameful reason for disappointment is that my cohort at school haven’t failed in the same way. At least five of my very close circle of friends from that time have PhDs and successful careers as academics (as far as I know the other philosopher in my circle dropped out of her PhD programme, but honourably, and years ago. She didn’t hang on grimly ’til the last desperate end as I seem to have done).

I don’t know what I’m going to do in the future. But I can say with conviction that I’m not going to take myself or my career too seriously anytime soon. I’m going to chill out and enjoy time with my family and work only to pay the bills (which at the moment Dot is paying). I’ve got to bring myself really to accept that I cannot have it all. I cannot have the ideally perfect life so I must be content with what I do have. This may seem obvious, but it will be difficult for me. I’ve been caught up in myself for so long.

Paradox

Ken writes:

If God does not exist, then it’s not the case that if I pray my prayers will be answered. I do not pray. Therefore God exists.

I came across this today in a discussion of the paradoxes of material implication. The interpretation of the conditional ‘if I pray, my prayers will be answered’ is at issue. Is it equivalent to ‘either I won’t pray or my prayers will be answered’, or is there a more intimately conditional meaning? If the conditional is treated truth-functionally, in accordance with classical logic, then the argument is valid. So its soundness turns on whether the first premise is true. It looks plausible though, doesn’t it?
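You can even check the validity claim by brute force, treating the conditional as material (a quick sketch of my own in Python):

```python
from itertools import product

def implies(p, q):
    """The material conditional: 'if p then q' is (not p) or q."""
    return (not p) or q

# G: God exists; P: I pray; A: my prayers are answered.
for g, p, a in product([True, False], repeat=3):
    premise1 = implies(not g, not implies(p, a))  # if no God, not (pray -> answered)
    premise2 = not p                              # I do not pray
    if premise1 and premise2:
        assert g  # the conclusion 'God exists' holds in every such row

print("No counterexample: read materially, the argument is valid.")
```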