Thursday, July 16, 2015

The Emotional Dog Does Learn New Tricks: A Reply to Pizarro and Bloom

In my last post, I discussed criticisms of Jonathan Haidt's Social Intuitionist Model (SIM) of moral judgment.  Haidt sees Pizarro and Bloom as making two important claims: (a) Fast and automatic moral intuitions are shaped and informed by prior reasoning, and (b) people actively engage in reasoning when faced with real world moral dilemmas.

Haidt begins his response to these claims by making two clarifications.  First, as he sees it, intuitionism allows for a great deal of malleability and responsiveness to new information and circumstances.  So although it's true that moral judgments can be altered as a result of differences in cognitive appraisal, this is consistent with the SIM.  Differences in belief as a result of reappraisal will lead to the activation of different intuitions.  This is unremarkable.  If I believe someone is a Nazi, I will likely be unfriendly with them.  If I find out they were forced at gunpoint to become a Nazi, my attitudes will likely change.  More importantly, however, there's a wealth of empirical research that suggests that people don't spontaneously engage in the kinds of reappraisals that Pizarro and Bloom say are so important.  When people do engage in these reappraisals, it's most often the result of social interaction, a key prediction of the SIM.

Second, Haidt fully admits that people may occasionally engage in deep moral reflection, though these occasions are few and far between.  As noted in the previous post, people may selectively craft their situations so as to experience (or not experience) certain emotions or intuitions.  Haidt again doubts how often this happens.  One of his pointed questions reveals how improbable it is: "Does it ever happen that a person has a gut feeling of a liberal but a second-order desire to become a conservative?  If so, the person could set out on a several-year program of befriending nice, articulate, and attractive conservatives."  That this picture strikes us as far-fetched is itself evidence in favor of Haidt's model of moral judgment.

Haidt ends by remarking on the scope of moral reasoning.  Pizarro and Bloom contend that moral judgments in everyday life are uniquely deliberative.  Laboratory studies don't capture these kinds of judgments because those made in the lab are detached from any real consequences.  In real life, they say, people make tough decisions about how to treat people equitably or about abortion.  Haidt doesn't disagree that people may be intensely conflicted by these decisions, but he doubts that the people involved actively search for arguments on both sides, weigh the strength of the arguments, and act on the logical entailments of those arguments.  Instead, Haidt characterizes these internal conflicts as conflicts between competing intuitions.  A woman thinking about having an abortion doesn't read Judith Jarvis Thomson or Don Marquis.  Instead, she thinks about her future life, about her social commitments to the father and to her parents, and about the life of her child.  Each of these intuitions tugs on her emotions.  The ultimate outcome is the result of the strongest intuitions experienced.

Equally important, Haidt doubts that these tough decisions happen all that often.  He asks the reader to take a mental tally of how often in the past year they've agonized over a moral issue and to compare that to all the times the reader has made a moral judgment after reading a newspaper, participating in gossip, or driving on roads surrounded by drivers less competent than oneself.  Further, even in cases that introspectively appear to be the result of deliberative reasoning, we should be cautious not to deceive ourselves.  Other research has demonstrated that we have a tendency to make up plausible post hoc rationalizations to justify our decisions.

So, despite a fair bit of criticism, the SIM remains consistent with what we observe in the lab and in real life.  People's moral judgments appear to be best characterized as being driven by intuitions and emotions.  Reasoned deliberation is the press secretary, the lawyer, the yes-man that justifies judgments that have already been made.  It's only in rare circumstances that reason plays anything more than a supporting role in the judgment of morality.


Haidt, J. (2003). The emotional dog does learn new tricks: A reply to Pizarro and Bloom (2003).

The Intelligence of Moral Intuitions: Comment on Haidt (2001)

The mark of a good scientist is a willingness to expose one's views to criticism.  Only by listening to the dissenting views of others can we escape our own prejudices and cognitive limitations.  This is particularly important for the views we hold with the most certainty.  We're happy to entertain disagreement about what to eat for dinner or what music sounds best, but when it comes to the ideas we're most committed to, we put up our barriers and shut down the conversation.

In my own investigations, I try to avoid this as much as possible.  I try to seek out criticism.  For instance, in several of my previous posts, I've discussed the Social Intuitionist Model (SIM), a model which I think is approximately correct.  Moral judgments are predominantly caused by our automatic intuitions, and reasoned deliberation takes a back seat.  My posts, however, have not been entirely uncritical, and it's in this vein that I'd like to discuss another criticism of Haidt's model of moral judgment.

In response to Haidt, Pizarro and Bloom point out situations in which people do deliberate about what moral actions to take.  They suggest several ways in which this might occur.  First, people may deliberately change their appraisal of a situation.  Imagine, for instance, being told that you will watch a gory factory accident.  You can choose to adopt a detached, analytical mindset or not.  If you do, your emotional reaction to the event will be correspondingly diminished.  The fact that this is possible suggests that our intuitions aren't simple on-off switches.  Rather, deliberation may modulate the effect of our intuitive reactions.

A second way in which we can alter our moral judgments by the force of reason is by exerting control over the situations we encounter.  Imagine a person on a diet throwing away all their junk food or a drug addict flushing their drugs.  It's true that if these people saw the enticing cues, their desires would flare up and become nearly irresistible.  Nevertheless, the initial act of defiance is the product of careful deliberation.

Moreover, Pizarro and Bloom point out that many decisions are made in opposition to prevailing societal mores.  Examples include "righteous Gentiles" in Nazi Germany, children who insist on becoming vegetarians within nonvegetarian families, college professors who defend the abolition of tenure, etc.  Because the SIM suggests that our moral judgments are often driven by a desire to fit in with the crowd, these exemplary decisions to buck the trend cast doubt upon Haidt's model.

Finally, Pizarro and Bloom suggest that some questions of morality cannot be answered by simple intuitions.  Haidt's stimuli consist of contrived hypothetical scenarios.  Real life, however, is filled with questions like: How much should I give to charity?  What is the proper balance of work and family?  What are my obligations to my friends?  These questions require careful deliberation, and they don't admit of quick intuitive responses.

For these reasons, Pizarro and Bloom defend the rationalist position.  Though some moral judgments are no doubt driven by intuitive responses, reason remains the dominant source of moral judgments in everyday life.

Normally, I would provide my comments at this point.  However, Haidt himself has responded directly to Pizarro and Bloom, a response which will be the subject of my next post.


Tuesday, July 7, 2015

Motive attribution asymmetry for love vs. hate drives intractable conflict

Scott Alexander over at Slate Star Codex is one of the best writers around today.  He offers a seemingly endless supply of wisdom and discernment about contemporary issues, and his short stories are equally full of remarkable insight.  For instance, one recent story includes a character, named Yellow, who swallows a yellow pill that allows them to read and search the mind of anyone they see.  Whereas many people might exult at finally finding out everybody's dirty little secrets and perhaps turn to a life of crime as the world's most successful blackmailer, Yellow becomes a forest ranger after making a startling discovery:
People's minds are heartbreaking.  Not because people are so bad, but because they're so good.
Nobody is the villain of their own life story.  Everybody thinks of themselves as an honest guy or gal just trying to get by, constantly under assault by circumstances and The System and hundreds and hundreds of assholes. They don’t just sort of believe this. They really believe it. You almost believe it yourself, when you’re deep into a reading.
Yellow's instant comprehension of everybody's internal struggle makes social interaction a Herculean burden, so he removes himself from the incessant heartache to a little cabin in the woods.

This brings me to the paper I've read most recently.  Across five studies, researchers demonstrated what they call "motive attribution asymmetry."   When someone is talking about their in-group, engagement in conflict is considered to be motivated by love for the in-group.  When talking about the out-group, however, engagement in conflict is thought to be more motivated by hatred.  This pattern was found for several populations.  Democrats believed Democrats were motivated by love for Democrats.  And Republicans believed Republicans were motivated by love for Republicans.  However, Democrats believed Republicans were motivated by hate for Democrats.  And Republicans believed Democrats were motivated by hate for Republicans.
The same pattern of love-for-us and hate-for-them was true of Israelis and Palestinians.  One exception, however, is that Palestinians attributed both love for Palestinians and hate for Israelis as important motives for Palestinians engaging in conflict.  Graphs of these patterns are reproduced below.

Fig. Israelis' attributions of both Israelis and Palestinians
Fig. Palestinians' attributions of both Israelis and Palestinians

This motive attribution asymmetry was correlated with a raft of other beliefs and intentions about intractable conflict.  In an Israeli population, those who most exhibited this tendency were less willing to negotiate, believed that a win-win scenario was less likely, were less likely to vote for a peace deal, believed Palestinians were less likely to vote for a peace deal, and believed Palestinians were essentially unchangeable.

Unlike the authors of many other papers, the authors of this one were not content with simply documenting yet another bias in the crowded menagerie of biases in the human mind.  They went further by exploring a potential corrective for this motive attribution asymmetry.  In one experiment, the researchers gave both Democrats and Republicans a monetary incentive to accurately attribute motives to the opposing party.  They found that people became remarkably more moderate when money was on the line.  People attributed love more and hate less as a motive for the out-group.

Despite their impressive empirical work, the researchers' theoretical explanations for these findings are a bit lackluster.  One potential mechanism, they say, is that people don't often observe out-group members expressing love.  Rather, they observe expressions of love almost exclusively among in-group members.  Hate, on the other hand, is quite evident among out-group members in situations of conflict.  Hence the motive attribution asymmetry.  This explanation, however, is inconsistent with the fact that incentivized participants were able to more accurately attribute motives of love and hate.  If observations of one's in-group and out-group were the whole story, one wouldn't expect to find people changing their minds once money was on the line.

Another potential explanation for these findings, they say, is that people are engaging in motivated reasoning.  They are convincing themselves that the out-group lacks any prosocial values as a means of dehumanizing them and justifying harm towards them.  When presented with a monetary incentive, however, a new motive kicks in: the motive to be accurate.  This explanation seems much more plausible and reflects the wisdom of the maxim: "A bet is a tax on bullshit."  It's unclear, however, in what other circumstances people might become motivated to report the truth.  Money is a great incentive, but what about a safe and stable nation?  Why aren't Israelis, Democrats, and Republicans incentivized to resolve conflicts for the good of their nation?  Wouldn't interminable conflict be a tremendous cost to the nation?  If so, then true patriots should want to end conflicts as efficiently as possible and be more incentivized to accurately perceive an opposing group's motives.  Further research on the conditions that change intergroup attributions would be a natural next step.  Research on interpersonal attributions would also be of interest.  Would a similar pattern emerge among individuals, or is this effect a result of groupish norms?  Would this result be found among different cliques in high school?  How about in sports?  More research along these lines is necessary for exposing the exact mechanism behind this phenomenon.

If people really knew why people engaged in conflict, there would be much less conflict.  Unfortunately, people aren't mind readers, not even close.  Instead, the results suggest people are systematically predisposed to misinterpret other people's minds.  It requires effort to actually figure out what other people are thinking, an effort that is all too often shirked in favor of scoring points for the in-group.  Although people pay lip service to resolving conflicts positively, that won't be possible until people are willing to take steps towards gaining an impartial understanding of their opposition's goals and desires.


Sunday, July 5, 2015

Moral Dumbfounding: When intuition finds no reason

In the past couple posts, I've  mentioned Jonathan Haidt's Social Intuitionist Model quite a bit.  Intuitions dominate our initial moral impressions and rather than correcting these intuitions, our faculty of reason makes these impressions stronger.  As Haidt puts it, "Reason is the press-secretary of the intuitions, and can pretend to no other office than that of ex-post facto spin doctor."  Perhaps the most intriguing support for this model is what Haidt calls "moral dumbfounding," which will be the subject of this post.

To test his model, Haidt had 30 college students answer questions about a series of dilemmas.  One was a dilemma well known in the history of moral psychology, the "Heinz dilemma."  Here's the dilemma in full.
In Europe, a woman was near death from a very bad disease, a special kind of cancer.  There was one drug that the doctors thought might save her. It was a form of radium for which a druggist was charging ten times what the drug cost him to make. The sick woman's husband, Heinz, went to everyone he knew to borrow the money, but he could only get together about half of what it cost. He told the druggist that his wife was dying, and asked him to sell it cheaper or let him pay later. But the druggist said, "No, I discovered the drug and I'm going to make money from it." So, Heinz got desperate and broke into the man's store to steal the drug for his wife. Was there anything wrong with what he did?
Given that this dilemma involves explicit tradeoffs between competing interests, Haidt predicted that people would easily engage in dispassionate moral reasoning.

As a comparison for the Heinz dilemma, participants were presented with two other dilemmas that didn't exhibit any obvious harm to any character.  The more famous of these stories is now known simply as "the Mark and Julie dilemma."
Julie and Mark, who are brother and sister, are traveling together in France. They are both on summer vacation from college. One night they are staying alone in a cabin near the beach.  They decide that it would be interesting and fun if they tried making love. At very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy it, but they decide not to do it again. They keep that night as a special secret between them, which makes them feel even closer to each other. So what do you think about this? Was it wrong for them to have sex?
Participants were also presented with a dilemma in which a medical student, Jennifer, eats a bit of a cadaver's flesh.  This cadaver was going to be incinerated the next day, so no one would miss it.  It was expected that participants would quickly and intuitively recognize the wrongness of Mark and Julie's actions as well as Jennifer's actions.  However, they would be at pains to verbally explain exactly why these characters had done something wrong.

Finally, participants were presented with two tasks that were expected to elicit strong intuitions, though they were non-moral in nature.  In one task, participants were asked to drink a glass of juice both before and after a sterilized cockroach had been dipped in it.  The other task involved the participants signing their name on a piece of paper.  On this paper were the words "I, (participant's name), hereby sell my soul, after my death, to [the experimenter], for the sum of two dollars."  At the bottom of the page was a note that read: "This is NOT a legal or binding contract, in any way."  Despite the apparent harmlessness of each of these actions, participants were unsurprisingly uncomfortable with these tasks.

After the participant read each dilemma and gave their judgment, an experimenter would "argue" with the participants.  The experimenter would non-aggressively undermine whatever reason the participant put forth in support of their judgment.  For example, if a participant said that Mark and Julie did something wrong because they might have a deformed child, the experimenter would remind the participant that Julie was taking birth control pills and Mark used a condom.  If the participant said that what Heinz did was ok because his wife needed it to survive, the experimenter would ask if it would be ok to steal if a stranger needed it or if a beloved pet dog needed it.  The same procedure was followed for the two behavioral tasks the participants were asked to perform.  If a participant chose not to drink the cockroach-dipped-juice, the experimenter would remind them that the roach was sterile.  If a participant refused to sign the piece of paper, the experimenter would remind them that it wasn't a real contract.

While participants were going through the rigmarole of this experiment, Haidt was recording their verbal and nonverbal responses.  Afterwards, Haidt had participants fill out a self-report survey asking them how confident they were in their judgments, how confused they were, how irritated they were, how much they had changed their mind from their initial judgment, how much their judgment was based on gut feelings, and how much their judgment was based on reason.

Haidt found striking differences between the Heinz dilemma and the other dilemmas.  When responding to the Heinz dilemma, participants tended to provide reasons before announcing their judgment.  They reported that their judgments were based more on reason than on gut feelings.  Their judgments were relatively stable and held with high confidence.  And they rarely said that they couldn't explain their judgments.  The other dilemmas left participants in a much different mental state.  Participants reported being more confused and less confident in their judgments.  They relied more on their gut than on their reasoning; after gentle probing from the experimenter, participants dropped most of the arguments they put forward, and they frequently admitted that they couldn't find any reasons for their judgments.  This observation, in particular, is what Haidt calls "moral dumbfounding."  Participants maintained their judgments despite the inability to articulate their reasons.  They were, in a sense, struck speechless, or dumbfounded.  Furthermore, participants often verbally expressed their dumbfoundedness.  These verbal expressions were made 38 times in response to the incest story but only twice in response to the Heinz dilemma.  Participants' responses to the behavioral tasks provided a mixed bag of results but were similar to the Mark and Julie dilemma in several respects.   Participants were unconfident in their decision not to sign the piece of paper, and they didn't believe their decision was the result of rational deliberation.  And they often said they couldn't think of any reason, but they maintained their decision nonetheless.

Prior to this study, moral psychologists had almost exclusively presented participants with just the Heinz dilemma.  Consequently, psychologists inferred that moral judgment was largely the result of conscious effortful reasoning.  In hindsight, this conclusion seems obviously invalid.  This is like showing people pictures of cute cats, asking them how they feel, and inferring that humans only feel happiness.  It's odd that this was the dominant paradigm for decades.  Perhaps I'm missing something, but Haidt's insight seems like an obvious idea whose time had come.  It's now quite evident that, at least under certain circumstances, people may eschew reason in favor of their gut instincts.  Haidt's study doesn't warrant the conclusion that the majority of moral judgment is intuitively driven, but it does provide another brick in the wall of evidence for the Social Intuitionist Model.


Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished manuscript, University of Virginia.

Ideology, Motivated Reasoning, and Cognitive Reflection

Despite what many social psychologists claim, humans are mostly rational.  When we are thirsty, we drink water.  When we're hungry, we eat food.  When we're on the edge of a cliff about to fall off, we look for ways to save ourselves.  For the large majority of decisions we make every day, we typically take actions that will satisfy our goals.  In certain circumstances, however, we are systematically biased.  Take, for instance, the domain of politics.  It's understandable for there to be disputes over values: privacy vs. security, economic freedom vs. equality, etc.  What's less understandable, however, is the fact that people get heated up over facts.  Why do liberals believe that humans are the cause of climate change while conservatives don't?  Why do conservatives believe that gun control would increase crime while liberals believe the opposite?  Liberals will say it's because conservatives are biased, and conservatives will say it's because liberals are biased.  Who's correct?

To begin, let's catalogue the factors that may lead to ignorance of the empirical facts.  There are at least three.  One factor is that humans aren't always especially thoughtful or deliberate.  Instead, we use heuristics, mental shortcuts for arriving at a desired outcome.  For instance, we rely on scientific experts to tell us the truth rather than seeking it out for ourselves.  Our choice of experts, however, is also often the result of nondeliberative thinking.  This unfortunate quirk of human decision making may lead people to become uninformed or misinformed.  A second factor is motivated reasoning.  That is, even if people engage their more reflective cognitive processes, they may do so in a way that steers them away from the truth.  When a person is motivated to maintain a relationship or preserve their identity, they may selectively interpret the evidence to suit their non-truth-attainment goals.  Finally, a person may have a certain kind of reasoning style that interferes with the attainment of truth.  There is a considerable body of research, for instance, suggesting that conservatism is associated with dogmatism, an aversion to complexity, and a craving for closure in arguments.  These cognitive traits may hinder a person in their pursuit of truth.

It's unclear, however, how these three factors interact to generate belief polarization.  In his article, Dan Kahan outlines three possibilities.  First is what he calls the "Bounded Rationality Position" (BRP).  According to BRP, our heuristic-driven reasoning is the most decisive factor in generating public discord over empirical matters.  On this view, laypeople inadequately engage in effortful information processing.  As a heuristic, then, these nondeliberative folk will tend to trust the received wisdom of their particular in-group, which in turn will lead to greater belief polarization.  A second alternative is what Kahan calls the "Ideological Asymmetry Position" (IAP).  IAP posits that right-wing ideology matters most in distorting empirical judgments.  Like BRP, IAP takes reasoning to be heuristic-driven and inadequately engaged.  This is said to be especially true of conservatives in light of previous correlational research on their cognitive traits.  Because liberalism is associated with, among other things, open-mindedness, it might be thought that liberals would be less vulnerable to the siren song of political bias.  The final account Kahan considers is what he calls the "Expressive Utility Position" (EUP).  This position, unlike both BRP and IAP, posits that motivated reasoning is the most important factor in belief polarization.  On this view, a person's primary motivation when looking at data is to protect their identity, and they will do so by selectively searching for and interpreting the evidence to fit with their particular in-group.  Reasoning, then, is not inadequately engaged.  Far from it.  Reasoning will tend to magnify ideological differences, not mitigate them, and this will be true across the political spectrum.

So there's the theory; now where's the test?  In the first part of his study, Kahan presented participants with the Cognitive Reflection Test (CRT).  This test is generally used to measure a person's disposition to engage in conscious and effortful information processing as opposed to heuristic-driven processing.  This quick three-item test can be found here.  Following the CRT, participants were split into three conditions.  In one condition, participants were told the following:
Psychologists believe the questions you have just answered measure how reflective and open-minded someone is.
The second condition tacked on an additional bit of information:
In one recent study, a researcher found that people who accept evidence of climate change tend to get more answers correct than those who reject evidence of climate change […] a finding that would imply that those who believe climate change is happening are more open-minded than those who are skeptical that climate change is happening.
The third condition replaced the above paragraph with this one:
In one recent study, a researcher found that people who reject evidence of climate change tend to get more answers correct than those who accept evidence of climate change […] a finding that would imply that those who are skeptical that climate change is happening are more open-minded than those who believe climate change is happening.
Participants were then asked how valid they personally believed the CRT was in assessing how reflective and open-minded they were.  Because open-mindedness is almost universally considered a positive trait, participants have an emotional stake in believing their group to be more (or at least not less) open-minded than their ideological opponents.  Hence, if a liberal were biased in favor of liberalism, they would discount the CRT's validity in the third condition while accepting the CRT's validity in the second.  The opposite would be true for conservatives; they would discount its validity in the second condition and accept it in the third.  All the theories above predict some motivated reasoning, but they each generate different hypotheses about the form that such reasoning would take.


  • IAP predicts that motivated reasoning should be especially pronounced among conservatives.  Liberals, on the other hand, should have roughly similar validity ratings regardless of condition.  Furthermore, conservatives should score worse than liberals on the CRT itself given that the CRT actually does measure cognitive reflection.
  • BRP predicts that people who score poorly on the CRT will be more inclined to express polarized sentiments.  That is, low cognitive reflection will lead to more bias.  This result, however, should hold regardless of one's political affiliation.  Political affiliation should also be unrelated to actual CRT scores.
  • EUP is the reverse of BRP.  It predicts that polarization will increase as CRT score increases.  That is, contrary to BRP, greater cognitive reflection will lead to more bias.  Similar to BRP, however, and unlike IAP, EUP is neutral on political affiliation.
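To keep these competing predictions straight, here is a minimal toy sketch in Python (my own illustration, not Kahan's model, code, or data; every coefficient is invented).  The "validity gap" below is how much more valid a respondent rates the CRT when the framing flatters their own side than when it flatters the other side.

```python
# Toy sketch, not Kahan's model or data: the qualitative validity-gap pattern
# each position predicts as a function of a respondent's own CRT score.
# All numbers are invented for illustration only.

def predicted_validity(crt, flatters_own_side, conservative, position):
    """Hypothetical perceived CRT validity on a 0-1 scale.

    crt: respondent's CRT score, scaled from 0 (low) to 1 (high)
    flatters_own_side: +1 if the framing favors the respondent's ideology, -1 otherwise
    """
    baseline = 0.5
    if position == "BRP":
        # Bias is strongest among low-CRT respondents, regardless of ideology.
        return baseline + (1 - crt) * 0.3 * flatters_own_side
    if position == "IAP":
        # Only conservatives are expected to respond to the flattering framing.
        return baseline + (0.3 * flatters_own_side if conservative else 0.0)
    if position == "EUP":
        # Bias grows with CRT score, again regardless of ideology.
        return baseline + crt * 0.3 * flatters_own_side
    raise ValueError(position)

for position in ("BRP", "IAP", "EUP"):
    for crt in (0.0, 1.0):  # low vs. high cognitive reflection
        gap = (predicted_validity(crt, +1, conservative=False, position=position)
               - predicted_validity(crt, -1, conservative=False, position=position))
        print(f"{position}: liberal respondent, CRT={crt:.0f}, validity gap = {gap:+.2f}")
```

Under BRP the gap shrinks as CRT scores rise, under EUP it grows, and under IAP it appears mainly for conservatives; the study tests which of these patterns shows up in the data.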

Before moving on to the results, take a moment to guess which theory you think was best supported.

The first step of data analysis was to compare liberals and conservatives on CRT scores.  Contrary to IAP, conservatives actually scored significantly better than liberals on the CRT, a surprising result given the literature on conservatives' other cognitive traits.  The results of the second step of data analysis were also contrary to IAP's predictions.  Both liberals and conservatives split in their perceptions of CRT validity depending on which condition they were in.  Conservatives believed the CRT was valid when it favored their own side but not so when it didn't.  The same pattern was evident among liberals as well.  So much for IAP then…

Next up on the chopping block is… *drum roll* … BRP!  As you'll recall, BRP predicts that as people's CRT scores increase, they should exhibit less ideological bias.  Cognitive reflection, on this view, should have a salutary effect on polarization.  However, this was not borne out by the data.  Indeed, as people scored higher on the CRT, they exhibited more and more partisanship.  In particular, liberals who scored high on the CRT were remarkably more averse to accepting the CRT's validity when in the third condition (in which climate change skeptics appeared more open-minded).  Additionally, conservatives who scored highly on the CRT welcomed it with open arms when in the third condition.  Taken together, these findings jibe with the predictions of EUP: greater cognitive reflection leads to greater political bias for both liberals and conservatives.

In my previous post, I discussed Jonathan Haidt's Social Intuitionist Model, a model of moral judgment that describes humans in a less than favorable light.  Humans, it is said, arrive at their moral judgments through automatic intuitive processes and later use their powers of reason, not to correct potential errors, but rather to rationalize all those errors away.  Like a magician waving his wand, the motivated reasoner can make any problem disappear.  Of course, the problems are still there.  They're just hiding in a hat now instead of out in the open.  Kahan's research supports this interpretation of moral judgment: those who were most reflective were also the best magicians; they were more able to twist their judgments to fit the narratives of their in-groups.  This picture, though grim and pessimistic, is the one we must look to if we hope to paint a brighter future for humanity.  Kahan's research suggests that we need to do more than nudge people's intuitive judgments towards the truth, as many others have suggested; that will only do so much.  Given that a large portion of our bias comes from our more reflective moments, we also have to remove the incentives people face for forming beliefs on grounds unconnected to the truth of those beliefs.  Indeed, in a sense, it's rational for, say, a conservative to believe that gun deregulation will decrease crime.  Not because this belief is particularly supported by evidence but rather because he will be expressing his commitment to his group, and his group will in turn support him.  It's rational, too, for a liberal to believe genetically modified foods are devil spawn, again, not because it's supported by evidence, but because by believing such things, they will be boosting their status within their group.  Removing or circumventing these cultural associations will be the work of future researchers.

Of course, this polarization is not true of every topic.  In fact, most topics don't exhibit such polarization (e.g., that the Moon revolves around the Earth, that earthquakes result from shifting tectonic plates, that height is heritable).  Like I said at the outset of this post, people are mostly rational.  But for those domains where we have some trouble, like politics, more research is needed to figure out exactly how we get things wrong and how we can fix our mistakes.


Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25-42.


Thursday, July 2, 2015

Bias and Reasoning: Haidt's theory of moral judgment

Summary:
Opinions about moral and political issues are like iPhones and Facebook profiles: everybody has one, but not everyone makes sure theirs is up to date.  Oftentimes, people try to preserve their favored opinion by rationalizing away any new evidence.  Indeed, according to Jonathan Haidt's Social Intuitionist Model (SIM), the majority of our moral judgments are arrived at by means of non-conscious, automatic, intuitive processing and are later justified by post hoc biased reasoning.

In support of this model, Haidt draws on a large body of research that details distortions in human cognition.  For example, it's been found that when expecting to discuss an issue with a partner (especially a friend) whose attitudes are known, people tend to shift their attitudes to align with those of their partner.  When their partner's attitudes are not known, people tend to moderate their opinions to minimize disagreement.  This type of attitude revision is due to what's called the "relatedness motive."  When people want to get along with others, they selectively shift their opinions.

Additionally, another kind of motive that is said to distort our moral judgments is the "coherence motive."  People with this kind of motive want to preserve their identity, and consequently, they eschew evidence that contradicts their core attitudes and beliefs, and they uncritically accept evidence that confirms them.  In one study, for instance, people were given mixed evidence about the efficacy of capital punishment in deterring crime.  Those who went into the study supporting capital punishment left with greater confidence.  Those who went into the study against capital punishment also left with greater confidence.  This flies in the face of rational deliberation.  When given evidence inconsistent with one's beliefs, one should lower one's confidence in those beliefs.  Hence, the coherence motive may distort the accuracy of our beliefs.

S. Matthew Liao, however, disagrees with Haidt's account of our moral judgments.  He doesn't dispute the fact that we are influenced by our friends or that we seek to preserve our core beliefs and attitudes.  Instead, he disputes that these tendencies should properly be considered biases.  To say that a person is biased is to say that they are not epistemically justified in believing certain propositions.  A person may be epistemically unjustified if they lack sufficient evidence to believe a proposition, or alternatively, if their belief is not grounded in that evidence.

Liao argues that people are typically justified in shifting their beliefs to become consistent with those of their friends.  To see why, consider what it means to be a friend.  A friend is someone whose judgment you typically trust.  When they express a belief, you have reason to believe that their belief is not arbitrarily arrived at.  Further, suppose you and your friend are about equal in intelligence.  It would be positively irrational not to take your friend's opinion into account, and it would be arrogant to suppose that you could not be mistaken.  This reasoning applies nearly as well to strangers.  Suppose you disagree with a person who you have no reason to believe is exceptionally irrational.  Again, given that there's a chance the stranger is correct and you are incorrect, you ought, rationally, to be inclined to shift your own confidence, even if just a little bit.  Thus, having the relatedness motive need not entail that a person is biased.

What about the coherence motive?  Liao argues that the coherence motive need not always lead to biased reasoning.  Let's make up a hypothetical example.  Suppose you believe that gun control will lead to fewer violent deaths, and someone else believes the opposite.  Now both of you are given the following two mixed pieces of evidence.

(1) In one state with strict gun control, there is a greater than average number of gang wars, which has led to more violent deaths.
(2) In one state with lenient gun control, there have been more school shootings.

Here's how you and the other person can both rationally walk away with greater confidence in your initial beliefs.  Suppose you believe, independently of the debate about gun control's relation to violent deaths, that the presence of school shootings decreases the outbreak of future gang wars (somehow).  Suppose the other person believes - again independently of the debate in question - that gang wars lead to fewer school shootings.  Your side belief lets you explain away the gang wars in (1) as a consequence of fewer school shootings rather than as a failure of gun control, so you accept proposition (2) as confirming your belief while discounting proposition (1); the other person's side belief lets them do the opposite.  Thus, you would both rationally leave with greater confidence in your beliefs.
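To make the structure of this explanation concrete, here is a minimal toy sketch in Python (my own illustration, not Liao's or Haidt's; the starting confidences and discount weights are made up).  Each reader discounts the item of evidence that their side belief explains away and takes the other item nearly at face value, so both walk away more confident.

```python
# Toy sketch with made-up numbers: two readers see the same mixed evidence about
# whether gun control reduces violent deaths, but each discounts the item their
# independent side belief explains away, so both end up more confident.

def update(confidence, weighted_evidence):
    """Nudge a 0-1 confidence by small, signed, discounted evidence steps."""
    for direction, discount in weighted_evidence:
        confidence += direction * (1 - discount) * 0.1
    return max(0.0, min(1.0, confidence))

# direction: +1 if an item supports "gun control reduces violent deaths", -1 otherwise.
# Evidence (1): strict-control state, more gang wars and deaths   -> direction -1
# Evidence (2): lenient-control state, more school shootings      -> direction +1

# You (pro-gun-control, initial confidence 0.70): believing that school shootings
# suppress gang wars lets you explain away (1), so you discount it heavily (0.9)
# and take (2) nearly at face value (0.1).
you = update(0.70, [(-1, 0.9), (+1, 0.1)])

# The other reader (anti-gun-control, initial confidence 0.30): believing that gang
# wars suppress school shootings lets them explain away (2) instead.
other = update(0.30, [(-1, 0.1), (+1, 0.9)])

print(f"{you:.2f}")    # 0.78 -- more convinced that gun control reduces deaths
print(f"{other:.2f}")  # 0.22 -- more convinced that it doesn't
```

The point of the sketch, as in Liao's defense, is that neither reader need be reasoning in a biased way given their independently held side beliefs.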

Critique:
Liao's defense of the relatedness motive seems weak.  It's certainly irrational to believe in one's own infallibility.  And it's also irrational to completely discount an epistemic peer's opinion.  But it's also irrational to continue to have one's beliefs shifted after having learned of the reasons behind the disagreement.  Once you know that your friend has a belief because of x, y, and z, the fact that he is your friend becomes irrelevant.  Believing simply because your friend says so is irrational.  Yet it is this kind of shifting of beliefs that (I think) is more common.  It is not that people shift their beliefs because of their epistemic humility, but rather to maintain social relations.  And that is irrational.

Liao's defense of the coherence motive also seems weak.  He concedes that people may be biased towards favoring their initial beliefs.  His argument is simply that belief polarization need not entail that people are biased.  It's an empirical question whether polarization is, in fact, a result of bias, one which he claims Haidt does not substantiate.  Though all this is true, it obscures where the burden of proof lies.  It is on Liao to explain why people systematically gain confidence in their beliefs when given mixed evidence.  If people were assessing the evidence independently of their other beliefs, one would expect some to become more confident and others less, with the shifts scattered in both directions.  Instead, people almost invariably become more confident in their initial beliefs.  It's Liao who has to explain why this is the case, not the other way around.

The general structure of Liao's arguments is like a wedge.  He tries to show how it's technically possible to account for these results while preserving the rationality of moral judgments.  From this, he suggestively hints that people's moral judgments are in fact rational.  This latter claim, however, is largely unsupported, and he would do well to acknowledge that more explicitly in his paper.


Friday, June 26, 2015

The Meaning of Life

When people think about philosophy, often what they think about is not the nature of the universe or ethics. Instead, what they think about is existential questions like "What's the meaning of life?" I think that this is rubbish and that philosophy needs an image makeover. There's nothing deep about this question, and people should stop asking it.

I once wrote an op-ed about feminism. In that article, I criticized another person's article. That person, let's call them Jane, said that the majority of us are feminists, we just don't know it. A feminist, Jane said, is anyone who believes in equality for men and women. According to a recent poll, indeed, the majority of people do believe in that. Therefore, most of us are feminists. I gave five arguments why Jane was wrong, but I'll only write about one for now. Basically, my argument was that words have meaning only insofar as people give them meaning. If, for instance, I decide to say "frindle" instead of "pen" when referring to writing utensils that use ink, then there's nothing wrong with that. People may not understand me, but I'm not doing anything logically inconsistent. There's nothing about the phonemes in the word "pen" that make them refer to pens more than the phonemes in the word "frindle." When I say "frindle" I'm talking about "a writing utensil that uses ink." Similarly, when I say "feminism" I'm not obligated to mean "movement for the equality of women." I might instead mean "a writing utensil that uses ink." And if I do use the word "feminism" in that way, I'm not doing anything logically inconsistent. People may not understand me, but if I pick up a pen and say that it's a feminism, then I'm not mistaken. And when people talk about "feminism" to refer to fat transgender progressive lesbians, they're also not mistaken when they don't wish to self-identify as feminists.

All this is to say that when someone says that they know the "meaning of life," they're probably correct. When someone says that the meaning of life is to help others, they're probably correct. When someone says that the meaning of life is to hurt others, they're also probably correct. It's just that the way they define "the meaning of life" is different. That's it. There's no big mystery to it. The whole disagreement is about definitions, nothing of actual substance. It's like people arguing about whether or not Pluto is a planet.

To see why it's a matter of no substance, consider what predictions each statement entails. What if the meaning of life is to help others? What kinds of things would you predict about the world? I contend that the world would be exactly the same regardless of the meaning of life. (The world would be a lot different if people *believed* the meaning of life was to help others, but that's a different question entirely.) This is not the case for things of substance. For instance, what if humans had three arms? You can imagine a world that's very different (regardless of people's beliefs). Ultimately, then, the question "What is the meaning of life?" is almost equivalent to asking "What's the meaning of frindle?"

This is not to say that it's impossible to be mistaken about the meaning of life. Someone might say that the meaning of life is to worship God. If there turns out not to be a god, then it's likely that the person is mistaken. This is analogous to my saying there's a frindle on my desk. If there happens not to be a frindle/pen on my desk, then I'll be mistaken. When most people talk about the meaning of life, however, I don't think their definition is dependent on the existence of an imaginary being, so for most people, the meaning of life is whatever they think it is. It's not complicated. People should stop asking that question just to sound philosophical.

Thursday, June 4, 2015

Moral Realism as Moral Motivation: The impact of meta-ethics on everyday decision-making

Summary:
Being primed to believe in moral realism leads to more charitable behavior.

A Closer Look at the Research:
Some things are right or wrong regardless of where or when you happen to live.  If you agree with that statement, you might call yourself a moral realist.  Moral realists believe that at least some statements about morality are true.  In contrast, a moral antirealist would disagree.  They would say either (1) that moral statements do not have any propositional content, (2) that all such statements are false, or (3) that the truth of all such statements is dependent on the speaker.  For example, the antirealist might say that the statement "Killing is wrong." is more akin to saying "Boo killing!" than to saying "The act of killing has the property of wrongness."  The statement "Boo killing!" is neither true nor false; hence it doesn't have any propositional content.  Alternatively, the antirealist might say that moral statements do have propositional content but are all false.  So the statement "Killing is wrong." does express something about the world in a way that the statement "Boo killing!" doesn't.  However, the expression is nonetheless false in the same way that the statement "The sun is a planet." is false.  Finally, an antirealist may say that moral statements are true but depend on some fact about the speaker (e.g. their cultural upbringing).  For instance, Person A might say eating meat is morally forbidden, while Person B may say the opposite.  An antirealist might say that both statements are true in virtue of the fact that Person A and Person B were raised in different cultures.

Note: These distinctions are not made in this article.  In the present article, the terms "realism" and "antirealism" are sparsely elaborated.

Liane Young and A.J. Durwin wanted to find out whether a person's position on the realism/antirealism debate would translate into real world consequences.  On the one hand, these distinctions may be considered exceedingly abstract and irrelevant to people's everyday lives.  On the other hand, it seems obvious that strongly held moral convictions can lead one to perform extreme acts of violence or generosity.  Additionally, moral laxity can lead one to disregard norms against cheating or other social taboos.  To suss out the answer to their question, Young and Durwin carried out two studies.

In the first study, a street canvasser approached passersby asking them to donate to a charity.  In one condition, prior to asking for a donation, the canvasser would ask the passerby a question: "Do you agree that some things are just morally right or wrong, good or bad, wherever you happen to be from in the world?" Call this the "realist" condition.  In another condition, the canvasser asked, "Do you agree that our morals and values are shaped by our culture and upbringing, so there are no absolute right answers to any moral questions?"  Call this the "antirealist" condition.  In the final condition, the canvasser didn't ask anything at all.  This was the control condition.

Young and Durwin found that participants in the realist condition were twice as likely to donate as people in the control and antirealist conditions.

Fig 1. Proportion of participants who made charitable donations across three conditions in Experiment 1.

The second study used online participants instead of strangers on the street.  Young and Durwin asked participants how much money (out of $20) they would be willing to donate to charity.  Prior to asking participants this question, Young and Durwin asked participants the realist, the antirealist, or a neutral question that had nothing to do with morality.  (The antirealist question in this study was slightly different from the first study.)

Again, Young and Durwin found that participants in the realist condition were more generous than those in the other two conditions, willing to part with more of their money than participants in either the antirealist or the control condition.

How do Young and Durwin explain these results?  They suggest two possibilities.  First, moral realists may perceive their moral obligations as more salient.  That is, realists may feel guiltier if they violate moral rules and prouder if they uphold them.  By highlighting the "truthiness" of morality, moral realists may become more sensitive to the moral worth of their own actions.

Second, (and less plausibly) priming morality may prime empathic or collective attitudes.  Young and Durwin speculate that emphasizing moral facts that apply to all people may cause people to consider themselves as a part of a collective group.  Consequently, this fellow-feeling could lead people to act more generously.

Limitations
  1. I'm very skeptical of the antirealist prime for the first study.  I see it as a non sequitur.  Disagreements between cultures about moral values do not entail that there are no right answers to any moral questions.  Further, only a portion of antirealists would say that there are no right answers to any moral questions.  As stated above, some antirealists happily bite the bullet and say that there are right answers, but that these answers are dependent on the cultural upbringing of the individual.
  2. The antirealist prime for the second study was similarly problematic.  Young and Durwin asked participants, "Do you agree that our morals and values are shaped by our culture and upbringing, so it is up to each person to discover his or her own moral truths?"  This question presupposes that there are such things as moral truths to be discovered.  This prime for antirealism ignores both the antirealists who would deny the propositional content of moral statements and those who accept moral statements' propositional content but deny their truth.
  3. The problem with these primes is brought out by the fact that in the second experiment, the vast majority of participants agreed with the primes regardless of what condition they were in.  In other words, if a participant was in the realist condition, they agreed with the realist prime; if they were in the antirealist condition, they agreed with the antirealist prime.  Given that realism and antirealism are mutually incompatible, you would think that participants' attitudes would be zero-sum.  That is, if 60% of participants agreed with realism, then only 40% would agree with antirealism.  What we find instead is that close to 100% of participants agree with both the realism and antirealism primes.  This suggests at least two possibilities: either these primes didn't really tap into the essence of the two concepts, or people can hold contradictory beliefs depending on the context.  Young and Durwin support the latter hypothesis.  For my part, I would say that both are true.
  4. In Figure 1 above, the proportion of charitable participants seems ridiculously high.  Really?  50% of participants agreed to donate to charity?  I'm skeptical.  If anyone reading this has been a canvasser in the past, please post your experience in the comments.
  5. (Disclaimer: I haven't read the supplementary materials.  The details might be in there.)  In the first study, Young and Durwin relied on only one canvasser to carry out the experiment.  The beliefs of the canvasser are not reported in the article.   If the canvasser believes in moral realism, this may lead to an expectancy bias.  That is, the canvasser may try harder in the realism condition than in other conditions because they believe that is the condition most likely to elicit donations from participants.
  6. This article didn't study the effect of meta-ethical attitudes on charitable donations but rather the effect of a meta-ethical prime on donations.  It's unclear if these primes actually caused people to shift their philosophical views, even for a little while.  Consequently, it's unclear how this research might generalize to everyday life.  If the donation rates in study 1 are correct, then this could be a huge leap for charities using street canvassers, but otherwise, this research seems useless to me.

Monday, January 19, 2015

Advantages of Longhand over Laptop Note Taking

Summary:
Longhand note taking is associated with better conceptual recall than laptop note taking.

A Closer Look at the Research:
Laptop note taking is becoming increasingly common on college campuses.  Many professors worry, however, that this practice is less beneficial than students think.  One reason for this concern is that there is a wealth of evidence suggesting that laptops distract students from the course material and consequently lead to less learning in class.  However, even when eliminating the potential for covert Facebook snooping, laptop note taking may still be less helpful than old-fashioned longhand note taking.

Psychologists Pam Mueller and Daniel Oppenheimer sought to measure the difference between these two methods of note taking.  In their first study, participants watched a TED Talk while taking notes on either a laptop or notebook.  After a couple distractor tasks, participants were tested on the video they had watched earlier.  The researchers found that participants who had taken longhand notes did significantly better than laptop note takers on conceptual questions, but there was no significant difference on factual questions.



The researchers hypothesized that the difference in performance between the two groups was due to the fact that longhand note takers processed the information in the videos more deeply.  Instead of blindly transcribing the message in the video, longhand note takers synthesized and summarized the information presented to them.  Indeed, when analyzing the content of the participants' notes, the researchers found that a much larger proportion of the laptop note takers' notes were verbatim copies of the video.

In their third experiment, the researchers had participants watch four 7-minute lectures while taking notes on either a laptop or notebook.  Participants were told they would be tested on the lectures a week later.  When participants returned to the lab, one group immediately took the test without reviewing their notes, and another group was allowed to review their notes for 10 minutes before taking the test.  The results are shown below.


The researchers found that participants who took longhand notes and were allowed to study performed better on both factual and conceptual questions than any other group, even laptop note takers who were allowed to study.

Given these results, the researchers suggest that reverting back to longhand note taking may be beneficial for students who wish to learn more in class and get better grades.

Limitations:
The researchers begin their article by providing a brief overview of the research done so far.  Most importantly, they point to the fact that previous research has been limited in its generalizability to the real world.  However, similar limitations afflict their own study in a number of ways.
  1. The informational material presented to participants was, in one study, 15 minutes long and in another ~30 minutes long.  This is different from typical college classes, which often range from 50-75 minutes or more.  In these longer lectures, computers may be more helpful for a couple of reasons.  First, it may be difficult to synthesize an entire hour's worth of information by taking notes in a notebook.  By using computers, however, students may review and update their notes as necessary throughout a lecture.  Second, if you'll recall, the researchers think that the deeper mental processing associated with notebook note taking is what leads to better memory.  However, during a longer lecture, students may zone out and stop paying attention.  During these daydreams, it may still be possible for students to transcribe on a computer what the lecturer is saying so that they can review the information later.  When restricted to notebooks, however, students who daydream are screwed.  They're left with no physical or mental record of what the lecturer said.  A study asking participants to remember more content for a longer duration would be a step towards overcoming this limitation.
  2. The researchers had participants take the test one week after viewing the lectures.  This is unlike the test-taking conditions of most students in college.  Oftentimes, students will be required to remember the information covered in several weeks' worth of lectures.  Students will be required to remember information they learned a month or two ago.  Again, in this regard, computer note takers seem to have the advantage.  Students who take notes on their computer are able to write more quickly than their fellow notebook note takers.  And it may be that the additional notes computer note takers have would give them an advantage in the long run.  They would have more to review than notebook note takers and would consequently perform better on memory tests.  A similar study with a longer follow-up time would be appropriate to test this hypothesis.
  3. Participants had no incentive to perform as well as they could.  And, needless to say, this is very different from the situation that college students face.  College students are constantly preoccupied with their grades, and they are painfully aware that their future rests on getting an A on their next test.  Perhaps additional pressure would urge participants in the laptop-study condition to study more rigorously.
  4. Finally, college students are not, in practice, limited in the amount of time they have to study.  And certainly, no good student would choose to study for just 10 minutes.  Perhaps, given a longer study period, participants in the computer-study condition would perform better.