
Thursday, July 16, 2015

The Emotional Dog Does Learn New Tricks: A Reply to Pizarro and Bloom

In my last post, I discussed criticisms of Jonathan Haidt's Social Intuitionist Model (SIM) of moral judgment.  Haidt sees Pizarro and Bloom as making two important claims: (a) fast and automatic moral intuitions are shaped and informed by prior reasoning, and (b) people actively engage in reasoning when faced with real-world moral dilemmas.

Haidt begins his response to these claims by making two clarifications.  First, as he sees it, intuitionism allows for a great deal of malleability and responsiveness to new information and circumstances.  So although it's true that moral judgments can be altered as a result of differences in cognitive appraisal, this is consistent with the SIM.  Differences in belief as a result of reappraisal will lead to the activation of different intuitions.  This is unremarkable.  If I believe someone is a Nazi, I will likely be unfriendly with them.  If I find out they were forced at gunpoint to become a Nazi, my attitudes will likely change.  More importantly, however, there's a wealth of empirical research that suggests that people don't spontaneously engage in the kinds of reappraisals that Pizarro and Bloom say are so important.  When people do engage in these reappraisals, it's most often the result of social interaction, a key prediction of the SIM.

Second, Haidt fully admits that people may occasionally engage in deep moral reflection, though these occasions are few and far between.  As noted in the previous post, people may selectively craft their situation so as to experience (or not experience) certain emotions or intuitions.  Haidt again doubts how often this happens.  One of Haidt's pointed questions reveals the improbability of this occurrence: "Does it ever happen that a person has a gut feeling of a liberal but a second-order desire to become a conservative?  If so, the person could set out on a several-year program of befriending nice, articulate, and attractive conservatives."  That this picture seems so far-fetched is itself evidence in favor of Haidt's model of moral judgment.

Haidt ends by remarking on the scope of moral reasoning.  Pizarro and Bloom contend that moral judgments in everyday life are uniquely deliberative.  Laboratory studies don't capture these kinds of judgments because those made in the lab are detached from any real consequences.  In real life, they say, people make tough decisions about how to treat people equitably or about abortion.  Haidt doesn't disagree that people may be intensely conflicted by these decisions, but he doubts that the people involved actively search for arguments on both sides, weigh the strength of the arguments, and act on the logical entailments of those arguments.  Instead, Haidt characterizes these internal conflicts as conflicts between competing intuitions.  A woman thinking about having an abortion doesn't read Judith Jarvis Thomson or Don Marquis.  Instead, she thinks about her future life, about her social commitments to the father and to her parents, and about the life of her child.  Each of these considerations tugs at her intuitions and emotions.  The ultimate outcome is determined by whichever intuitions are strongest.

Equally important, Haidt doubts that these tough decisions happen all that often.  He asks the reader to take a mental tally of how often in the past year they've agonized over a moral issue and to compare that to all the times the reader has made a moral judgment after reading a newspaper, participating in gossip, or driving on roads surrounded by drivers less competent than oneself.  Further, even in cases that introspectively appear to be the result of deliberative reasoning, we should be cautious not to deceive ourselves.  Other research has demonstrated that we have a tendency to make up plausible post hoc rationalizations to justify our decisions.

So, despite a fair bit of criticism, the SIM remains consistent with what we observe in the lab and in real life.  People's moral judgments appear to be best characterized as being driven by intuitions and emotions.  Reasoned deliberation is the press secretary, the lawyer, the yes-man that justifies judgments that have already been made.  It's only in rare circumstances that reason plays anything more than a supporting role in the judgment of morality.


Haidt, J. (2003). The emotional dog does learn new tricks: A reply to Pizarro and Bloom (2003). Psychological Review, 110(1), 197-198.

The Intelligence of Moral Intuitions: Comment on Haidt (2001)

The mark of a good scientist is their willingness to expose themselves to criticism.  Only by listening to the dissenting views of others can we escape our own prejudices and cognitive limitations.  This is particularly important for the views we find most certain.  It's easy to honor the free expression of ideas when the topic is what to eat for dinner or what music sounds best, but when it comes to the ideas we're most committed to, we put up our barriers and shut down the conversation.

In my own investigations, I try to avoid this as much as possible.  I try to seek out criticism.  For instance, in several of my previous posts, I've discussed the Social Intuitionist Model (SIM), a model which I think is approximately correct.  Moral judgments are predominantly caused by our automatic intuitions, and reasoned deliberation takes a back seat.  My posts, however, have not been entirely uncritical of the model.  It's in this vein that I'd like to discuss another criticism of Haidt's model of moral judgment.

In response to Haidt, Pizarro and Bloom point out situations in which people do deliberate about what moral actions to take.  They suggest several ways in which this might occur.  First, people may deliberately change their appraisal of a situation.  Imagine, for instance, being told that you will watch a gory factory accident.  You have the ability to choose to take a detached, analytical mindset or not.  If you do, your emotional reaction to the event will be diminished accordingly.  The fact that this is possible suggests that our intuitions aren't simple on-off switches.  Rather, deliberation may modulate the effect of our intuitive reactions.

A second way in which we can alter our moral judgments by the force of reason is by exerting control over the situations we encounter.  Imagine a person on a diet throwing away all their junk food or a drug addict flushing their drugs.  It's true that if these people saw the enticing cues, their desires would flare up and be nearly irresistible.  Nevertheless, the initial act of defiance is the product of careful deliberation.

Moreover, Pizarro and Bloom point out that many moral decisions are made in opposition to prevailing societal mores.  Examples include "righteous Gentiles" in Nazi Germany, children who insist on becoming vegetarians within nonvegetarian families, college professors who defend the abolition of tenure, and so on.  Because the SIM suggests that our moral judgments are often driven by a desire to fit in with the crowd, these exemplary decisions to buck the trend cast doubt upon Haidt's model.

Finally, Pizarro and Bloom suggest that some questions of morality cannot be answered by simple intuitions.  Haidt's stimuli consist of contrived hypothetical scenarios.  Real life, however, is filled with questions like: How much should I give to charity?  What is the proper balance of work and family?  What are my obligations to my friends?  These questions require careful deliberation, and they don't admit of quick intuitive responses.

For these reasons, Pizarro and Bloom defend the rationalist position.  Though some moral judgments are no doubt driven by intuitive responses, reason remains the dominant source of moral judgments in everyday life.

Normally, I would provide my comments at this point.  However, Haidt himself has responded directly to Pizarro and Bloom, a response which will be the subject of my next post.


Tuesday, July 7, 2015

Motive attribution asymmetry for love vs. hate drives intractable conflict

Scott Alexander over at Slate Star Codex is one of the best writers around today.  Not only does he offer a seemingly endless supply of wisdom and discernment about contemporary issues; his short stories are equally full of remarkable insight.  For instance, one recent story includes a character named Yellow who swallows a yellow pill, allowing him to read and search the mind of anyone he sees.  Whereas many people might exult at finally finding out everybody's dirty little secret, and perhaps turn to a life of crime as the world's most successful blackmailer, Yellow becomes a forest ranger after making a startling discovery:
People's minds are heartbreaking.  Not because people are so bad, but because they're so good.
Nobody is the villain of their own life story.  Everybody thinks of themselves as an honest guy or gal just trying to get by, constantly under assault by circumstances and The System and hundreds and hundreds of assholes. They don’t just sort of believe this. They really believe it. You almost believe it yourself, when you’re deep into a reading.
Yellow's instant comprehension of everybody's internal struggle makes social interaction a Herculean burden, so he removes himself from the incessant heartache to a little cabin in the woods.

This brings me to the paper I've read most recently.  Across five studies, researchers demonstrated what they call "motive attribution asymmetry."   When someone is talking about their in-group, engagement in conflict is considered to be motivated by love for the in-group.  When talking about the out-group, however, engagement in conflict is thought to be more motivated by hatred.  This pattern was found for several populations.  Democrats believed Democrats were motivated by love for Democrats.  And Republicans believed Republicans were motivated by love for Republicans.  However, Democrats believed Republicans were motivated by hate for Democrats.  And Republicans believed Democrats were motivated by hate for Republicans.
The same pattern of love-for-us and hate-for-them was true of Israelis and Palestinians.  One exception, however, is that Palestinians attributed both love for Palestinians and hate for Israelis as important motives for Palestinians engaging in conflict.  Graphs of these patterns can be seen below.

[Figures: Israelis' attributions of both Israelis and Palestinians; Palestinians' attributions of both Israelis and Palestinians]

This motive attribution asymmetry was correlated with a raft of other beliefs and intentions about intractable conflict.  In an Israeli population, those who most exhibited this tendency were less willing to negotiate, believed that a win-win scenario was less likely, were less likely to vote for a peace deal, believed Palestinians were less likely to vote for a peace deal, and believed Palestinians were essentially unchangeable.

Unlike the authors of many other papers, the authors of this one were not content with simply documenting yet another bias in the crowded menagerie of biases in the human mind.  They went further by exploring a potential corrective for this motive attribution asymmetry.  In one experiment, the researchers gave both Democrats and Republicans a monetary incentive to accurately attribute motives to the opposing party.  They found that people became remarkably more moderate when money was on the line.  People attributed love more and hate less as a motive for the out-group.

Despite their impressive empirical work, the researchers' theoretical explanations for these findings are a bit lackluster.  One potential mechanism, they say, is that people don't often observe out-group members expressing love.  Rather, they observe expressions of love almost exclusively among in-group members.  Hate, on the other hand, is quite evident among out-group members in situations of conflict.  Thus, the motive attribution asymmetry.  This explanation, however, is inconsistent with the fact that incentivized participants were able to more accurately attribute motives of love and hate.  If observations of one's in-group and out-group were the whole story, one wouldn't expect to find people changing their minds once money was on the line.

Another potential explanation for these findings, they say, is that people are engaging in motivated reasoning.  They are convincing themselves that the out-group lacks any prosocial values as a means of dehumanizing them and justifying harm towards them.  When presented with a monetary incentive, however, a new motive kicks in, the motive to be accurate.  This explanation seems much more plausible and reflects the wisdom of the maxim: "A bet is a tax on bullshit."  It's unclear, however, in what other circumstances people might become motivated to report the truth.  Money is a great incentive, but what about a safe and stable nation?  Why aren't Israelis, Democrats, and Republicans incentivized to resolve conflicts for the good of their nation?  Wouldn't interminable conflict be a tremendous cost to the nation?  If so, then true patriots should want to end conflicts as efficiently as possible and be more incentivized to accurately perceive an opposing group's motives.  Further research on the conditions that change intergroup attributions would be a natural next step.  Research on interpersonal attributions would also be of interest.  Would a similar pattern emerge among individuals, or is this effect a result of groupish norms?  Would this result be found among different cliques in high school?  How about in sports?  More research along these lines is necessary for exposing the exact mechanism of this phenomenon.

If people really knew why people engaged in conflict, there would be much less conflict.  Unfortunately, people aren't mind readers, not even close.  Instead, the results suggest people are systematically predisposed to misinterpret other people's minds.  It requires effort to actually figure out what other people are thinking, an effort that is all too often shirked in favor of scoring points for the in-group.  Although people pay lip service to resolving conflicts positively, that won't be possible until people are willing to take steps towards gaining an impartial understanding of their opposition's goals and desires.


Sunday, July 5, 2015

Moral Dumbfounding: When intuition finds no reason

In the past couple of posts, I've mentioned Jonathan Haidt's Social Intuitionist Model quite a bit.  Intuitions dominate our initial moral impressions, and rather than correcting these intuitions, our faculty of reason makes those impressions stronger.  As Haidt puts it, "Reason is the press-secretary of the intuitions, and can pretend to no other office than that of ex-post facto spin doctor."  Perhaps the most intriguing support for this model is what Haidt calls "moral dumbfounding," which will be the subject of this post.

To test his model, Haidt had 30 college students answer questions about a series of dilemmas.  One was a dilemma well known in the history of moral psychology, the "Heinz dilemma."  Here's the dilemma in full.
In Europe, a woman was near death from a very bad disease, a special kind of cancer.  There was one drug that the doctors thought might save her. It was a form of radium for which a druggist was charging ten times what the drug cost him to make. The sick woman's husband, Heinz, went to everyone he knew to borrow the money, but he could only get together about half of what it cost. He told the druggist that his wife was dying, and asked him to sell it cheaper or let him pay later. But the druggist said, "No, I discovered the drug and I'm going to make money from it." So, Heinz got desperate and broke into the man's store to steal the drug for his wife. Was there anything wrong with what he did?
Given that this dilemma involves explicit tradeoffs between competing interests, Haidt predicted that people would easily engage in dispassionate moral reasoning.

As a comparison for the Heinz dilemma, participants were presented with two other dilemmas that didn't exhibit any obvious harm to any character.  The more famous of these stories is now known simply as "the Mark and Julie dilemma."
Julie and Mark, who are brother and sister are traveling together in France. They are both on summer vacation from college. One night they are staying alone in a cabin near the beach.  They decide that it would be interesting and fun if they tried making love. At very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy it, but they decide not to do it again. They keep that night as a special secret between them, which makes them feel even closer to each other. So what do you think about this? Was it wrong for them to have sex?
Participants were also presented with a dilemma in which a medical student, Jennifer, eats a bit of a cadaver's flesh.  This cadaver was going to be incinerated the next day, so no one would miss it.  It was expected that participants would quickly and intuitively recognize the wrongness of Mark and Julie's actions as well as Jennifer's, but that they would be hard pressed to explain verbally exactly why these characters had done something wrong.

Finally, participants were presented with two tasks that were expected to elicit strong intuitions, though they were non-moral in nature.  In one task, participants were asked to drink a glass of juice both before and after a sterilized cockroach had been dipped in it.  The other task involved the participants signing their name on a piece of paper.  On this paper were the words "I, (participant's name), hereby sell my soul, after my death, to [the experimenter], for the sum of two dollars."  At the bottom of the page was a note that read: "This is NOT a legal or binding contract, in any way."  Despite the apparent harmlessness of each of these actions, participants were unsurprisingly uncomfortable with these tasks.

After the participant read each dilemma and gave their judgment, an experimenter would "argue" with the participants.  The experimenter would non-aggressively undermine whatever reason the participant put forth in support of their judgment.  For example, if a participant said that Mark and Julie did something wrong because they might have a deformed child, the experimenter would remind the participant that Julie was taking birth control pills and Mark used a condom.  If the participant said that what Heinz did was ok because his wife needed it to survive, the experimenter would ask if it would be ok to steal if a stranger needed it or if a beloved pet dog needed it.  The same procedure was followed for the two behavioral tasks the participants were asked to perform.  If a participant chose not to drink the cockroach-dipped-juice, the experimenter would remind them that the roach was sterile.  If a participant refused to sign the piece of paper, the experimenter would remind them that it wasn't a real contract.

While participants were going through the rigmarole of this experiment, Haidt was recording their verbal and nonverbal responses.  Afterwards, Haidt had participants fill out a self-report survey asking them how confident they were in their judgments, how confused they were, how irritated they were, how much they had changed their mind from their initial judgment, how much their judgment was based on gut feelings, and how much their judgment was based on reason.

Haidt found striking differences between the Heinz dilemma and the other dilemmas.  When responding to the Heinz dilemma, participants tended to provide reasons before announcing their judgment.  They reported that their judgments were based more on reason than on gut feelings.  Their judgments were relatively stable and held with high confidence.  And they rarely said that they couldn't explain their judgments.  The other dilemmas left participants in a much different mental state.  Participants reported being more confused and less confident in their judgments.  They relied more on their gut than on their reasoning; after gentle probing from the experimenter, participants dropped most of the arguments they put forward, and they frequently admitted that they couldn't find any reasons for their judgments.  This observation, in particular, is what Haidt calls "moral dumbfounding."  Participants maintained their judgments despite the inability to articulate their reasons.  They were, in a sense, struck speechless, or dumbfounded.  Furthermore, participants often verbally expressed their dumbfoundedness.  These verbal expressions were made 38 times in response to the incest story but only twice in response to the Heinz dilemma.  Participants' responses to the behavioral tasks provided a mixed bag of results but were similar to the Mark and Julie dilemma in several respects.   Participants were unconfident in their decision not to sign the piece of paper, and they didn't believe their decision was the result of rational deliberation.  And they often said they couldn't think of any reason, but they maintained their decision nonetheless.

Prior to this study, moral psychologists had almost exclusively presented participants with just the Heinz dilemma.  Consequently, psychologists inferred that moral judgment was largely the result of conscious effortful reasoning.  In hindsight, this conclusion seems obviously invalid.  This is like showing people pictures of cute cats, asking them how they feel, and inferring that humans only feel happiness.  It's odd that this was the dominant paradigm for decades.  Perhaps I'm missing something, but Haidt's insight seems like an obvious idea whose time had come.  It's now quite evident that, at least under certain circumstances, people may eschew reason in favor of their gut instincts.  Haidt's study doesn't warrant the conclusion that the majority of moral judgment is intuitively driven, but it does provide another brick in the wall of evidence for the Social Intuitionist Model.


Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished manuscript, University of Virginia.

Ideology, Motivated Reasoning, and Cognitive Reflection

Despite what many social psychologists claim, humans are mostly rational.  When we are thirsty, we drink water.  When we're hungry, we eat food.  When we're on the edge of a cliff about to fall off, we look for ways to save ourselves.  For the large majority of decisions we make every day, we take actions that will satisfy our goals.  In certain circumstances, however, we are systematically biased.  Take, for instance, the domain of politics.  It's understandable for there to be disputes over values: privacy vs. security, economic freedom vs. equality, etc.  What's less understandable, however, is the fact that people get heated up over facts.  Why do liberals believe that humans are the cause of climate change while conservatives don't?  Why do conservatives believe that gun control would increase crime while liberals believe the opposite?  Liberals will say it's because conservatives are biased, and conservatives will say it's because liberals are biased.  Who's correct?

To begin, let's catalogue the factors that may lead to ignorance of the empirical facts.  There are at least three.  One factor is that humans aren't always especially thoughtful or deliberate.  Instead, we use heuristics, mental shortcuts for arriving at a desired outcome.  For instance, we rely on scientific experts to tell us the truth rather than seeking it out for ourselves.  Our choice of experts, however, is also often the result of nondeliberative thinking.  This unfortunate quirk of human decision making may lead people to become uninformed or misinformed.  A second factor is motivated reasoning.  That is, even if people engage their more reflective cognitive processes, they may do so in a way that steers them away from the truth.  When a person is motivated to maintain a relationship or preserve their identity, they may selectively interpret the evidence to suit their non-truth-attainment goals.  Finally, a person may have a certain kind of reasoning style that interferes with the attainment of truth.  There is a considerable body of research, for instance, suggesting that conservatism is associated with dogmatism, an aversion to complexity, and a craving for closure in an argument.  These cognitive traits may hinder a person in their pursuit of truth.

It's unclear, however, how these three factors interact to generate belief polarization.  In his article, Dan Kahan outlines three possibilities.  First is what he calls the "Bounded Rationality Position" (BRP).  According to BRP, our heuristic-driven reasoning is the most decisive factor in generating public discord over empirical matters.  On this view, laypeople inadequately engage in effortful information processing.  As a heuristic, then, these nondeliberative folk will tend to trust the received wisdom of their particular in-group, which in turn will lead to greater belief polarization.  A second alternative is what Kahan calls the "Ideological Asymmetry Position" (IAP).  IAP posits that right-wing ideology matters most in distorting empirical judgments.  Like BRP, IAP takes reasoning to be heuristic-driven and inadequately engaged.  This is said to be especially true of conservatives in light of previous correlational research on their cognitive traits.  Because liberalism is associated with, among other things, open-mindedness, it might be thought that liberals would be less vulnerable to the siren song of political bias.  The final account Kahan considers is what he calls the "Expressive Utility Position" (EUP).  This position, unlike both BRP and IAP, posits that motivated reasoning is the most important factor in belief polarization.  On this view, a person's primary motivation when looking at data is to protect their identity, and they will do so by selectively searching for and interpreting the evidence to fit with their particular in-group.  Reasoning, then, is not inadequately engaged.  Far from it.  Reasoning will tend to magnify ideological differences, not mitigate them, and this will be true across the political spectrum.

So there's the theory; now where's the test?  In the first part of his study, Kahan presented participants with the Cognitive Reflection Test (CRT).  This test is generally used to measure a person's disposition to engage in conscious and effortful information processing as opposed to heuristic-driven processing.  This quick three-item test can be found here.  Following the CRT, participants were split into three conditions.  In one condition, participants were told the following:
Psychologists believe the questions you have just answered measure how reflective and open-minded someone is.
The second condition tacked on an additional bit of information:
In one recent study, a researcher found that people who accept evidence of climate change tend to get more answers correct than those who reject evidence of climate change […] a finding that would imply that those who believe climate change is happening are more open-minded than those who are skeptical that climate change is happening.
The third condition replaced the above paragraph with this one:
In one recent study, a researcher found that people who reject evidence of climate change tend to get more answers correct than those who accept evidence of climate change […] a finding that would imply that those who are skeptical that climate change is happening are more open-minded than those who believe climate change is happening.
Participants were then asked how valid they personally believed the CRT was in assessing how reflective and open-minded they were.  Because open-mindedness is almost universally considered a positive trait, participants have an emotional stake in believing their group to be more (or at least not less) open-minded than their ideological opponents.  Hence, if a liberal were biased in favor of liberalism, they would discount the CRT's validity in the third condition while accepting the CRT's validity in the second.  The opposite would be true for conservatives; they would discount its validity in the second condition and accept it in the third.  All the theories above predict some motivated reasoning, but they each generate different hypotheses about the form that such reasoning would take.


  • IAP predicts that motivated reasoning should be especially pronounced among conservatives.  Liberals, on the other hand, should have roughly similar validity ratings regardless of condition.  Furthermore, conservatives should score worse than liberals on the CRT itself given that the CRT actually does measure cognitive reflection.
  • BRP predicts that people who score poorly on the CRT will be more inclined to express polarized sentiments.  That is, low cognitive reflection will lead to more bias.  This result, however, should hold regardless of one's political affiliation.  Political affiliation should also be unrelated to actual CRT scores.
  • EUP is the reverse of BRP.  It predicts that polarization will increase as CRT score increases.  That is, contrary to BRP, greater cognitive reflection will lead to more bias.  Similar to BRP, however, and unlike IAP, EUP is neutral on political affiliation.
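To make the competing predictions concrete, here is a toy sketch of my own (not Kahan's model or code) of the "bias" each position predicts, where bias is the gap between how valid a person rates the CRT in the condition that flatters their own side versus the condition that flatters the other side.  The function and all the numbers are invented purely to show the shape of each hypothesis: BRP's bias shrinks as CRT scores rise, IAP's bias is concentrated among conservatives, and EUP's bias grows with CRT scores on both sides.

    # Toy illustration only: the "gap" is the own-side minus other-side validity rating.
    # All numbers are made up; only the direction of each curve matters.

    def predicted_validity_gap(crt_score, ideology, position):
        """Hypothetical validity-rating gap on an arbitrary scale.
        crt_score is assumed normalized to [0, 1]; ideology is 'liberal' or 'conservative'."""
        if position == "BRP":
            # Bounded Rationality: less reflection means more bias, for everyone.
            return 1.0 - crt_score
        if position == "IAP":
            # Ideological Asymmetry: bias concentrated among conservatives.
            return (1.0 - crt_score) if ideology == "conservative" else 0.1
        if position == "EUP":
            # Expressive Utility: more reflection means more bias, for everyone.
            return crt_score
        raise ValueError("unknown position: " + position)

    for position in ("BRP", "IAP", "EUP"):
        for ideology in ("liberal", "conservative"):
            gaps = [round(predicted_validity_gap(s, ideology, position), 2)
                    for s in (0.0, 0.5, 1.0)]
            print(position, ideology, "gap at CRT 0 / 0.5 / 1:", gaps)

Kahan's actual test, described below, amounts to asking which of these three shapes the data trace out.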

Before moving on to the results, take a moment to guess which theory you think was best supported.

The first step of data analysis was to compare liberals and conservatives on CRT scores.  Contrary to IAP, conservatives actually scored significantly better than liberals on the CRT, a surprising result given the literature on conservatives' other cognitive traits.  The results of the second step of data analysis were also contrary to IAP's predictions.  Both liberals and conservatives split in their perceptions of CRT validity depending on which condition they were in.  Conservatives believed the CRT was valid when it favored their own side but not so when it didn't.  The same pattern was evident among liberals as well.  So much for IAP then…

Next up on the chopping block is… *drum roll* … BRP!  As you'll recall, BRP predicts that as people's CRT scores increase, they should exhibit less ideological bias.  Cognitive reflection, on this view, should have a salutary effect on polarization.  However, this was not borne out by the data.  Indeed, as people scored higher on the CRT, they exhibited more and more partisanship.  In particular, liberals who scored high on the CRT were remarkably more averse to accepting the CRT's validity when in the third condition (in which climate change skeptics appeared more open-minded).  Additionally, conservatives who scored highly on the CRT welcomed it with open arms when in the third condition.  In conjunction, these findings jibe more closely with the predictions of EUP; greater cognitive reflection leads to greater political bias for both liberals and conservatives.

In my previous post, I discussed Jonathan Haidt's Social Intuitionist Model, a model of moral judgment that describes humans in a less than favorable light.  Humans, it is said, arrive at their moral judgments through automatic intuitive processes and later use their powers of reason, not to correct potential errors, but rather to rationalize all those errors away.  Like a magician waving his wand, the motivated reasoner can make any problem disappear.  Of course, the problems are still there.  They're just hiding in a hat now instead of out in the open.  Kahan's research supports this interpretation of moral judgment.  In Kahan's studies, those who were most reflective were also the best magicians; they were better able to twist their judgments to fit the narratives of their in-groups.  This picture, though grim and pessimistic, is the one we must look to if we hope to paint a brighter future for humanity.  Kahan's research suggests that it is not enough to nudge people's intuitive judgments towards the truth, as many others have suggested.  That will only do so much.  Given that a large portion of our bias comes from our more reflective moments, we also have to remove the incentives people face for forming beliefs on grounds unconnected to the truth of those beliefs.  Indeed, in a sense, it's rational for, say, a conservative to believe that gun deregulation will decrease crime, not because this belief is particularly supported by evidence, but rather because he will be expressing his commitment to his group, and his group will in turn support him.  It's rational, too, for a liberal to believe genetically modified foods are devil spawn, again, not because it's supported by evidence, but because by believing such things, they will be boosting their status within their group.  Removing or circumventing these cultural associations will be the work of future researchers.

Of course, this polarization is not true of every topic.  In fact, most topics don't exhibit such polarization (e.g., that the Moon revolves around the Earth, that earthquakes result from shifting tectonic plates, that height is heritable).  Like I said at the outset of this post, people are mostly rational.  But for those domains where we do have trouble, like politics, more research is needed to figure out exactly how we get things wrong and how we can fix our mistakes.


Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25-42.


Thursday, July 2, 2015

Bias and Reasoning: Haidt's theory of moral judgment

Summary:
Opinions about moral and political issues are like iPhones and Facebook profiles: everybody has one, but not everyone makes sure they're up to date.  Oftentimes, people try to preserve their favored opinion by rationalizing away any new evidence.  Indeed, according to Jonathan Haidt's Social Intuitionist Model (SIM), the majority of our moral judgments are arrived at by means of non-conscious, automatic, intuitive processing and are later justified by post hoc biased reasoning.

In support of this model, Haidt draws on a large body of research that details distortions in human cognition.  For example, it's been found that when expecting to discuss an issue with a partner (especially a friend) whose attitudes are known, people tend to shift their attitudes to align with those of their partner.  When their partner's attitudes are not known, people tend to moderate their opinions to minimize disagreement.  This type of attitude revision is due to what's called the "relatedness motive."  When people want to get along with others, they selectively shift their opinions.

Another kind of motive that is said to distort our moral judgments is the "coherence motive."  People with this kind of motive want to preserve their identity, and consequently, they eschew evidence that contradicts their core attitudes and beliefs and uncritically accept evidence that confirms them.  In one study, for instance, people were given mixed evidence about the efficacy of capital punishment in deterring crime.  Those who went into the study supporting capital punishment left with greater confidence.  Those who went into the study opposing capital punishment also left with greater confidence.  This flies in the face of rational deliberation.  When given evidence inconsistent with one's beliefs, one should lower one's confidence in those beliefs.  Hence, the coherence motive may distort the accuracy of our beliefs.

S. Matthew Liao, however, disagrees with Haidt's account of our moral judgments.  He doesn't dispute the fact that we are influenced by our friends or that we seek to preserve our core beliefs and attitudes.  Instead, he disputes that these tendencies should properly be considered biases.  To say that a person is biased is to say that they are not epistemically justified in believing certain propositions.  A person may be epistemically unjustified if they lack sufficient evidence to believe a proposition or, alternatively, if their belief is not grounded in that evidence.

Liao argues that people are typically justified in shifting their beliefs to become consistent with those of their friends.  To see why, consider what it means to be a friend.  A friend is someone whose judgment you typically trust.  When they express a belief, you have reason to believe that their belief is not arbitrarily arrived at.  Further, suppose you and your friend are about equal in intelligence.  It would be positively irrational not to take your friend's opinion into account, and it would be arrogant to suppose that you could not be the one who is mistaken.  This reasoning applies almost as well to strangers.  Suppose you disagree with a person who you have no reason to believe is exceptionally irrational.  Again, given that there's a chance the stranger is correct and you are incorrect, you ought rationally to shift your own confidence, even if just a little bit.  Thus, having the relatedness motive need not entail that a person is biased.

What about the coherence motive?  Liao argues that the coherence motive need not always lead to biased reasoning.  Let's make up a hypothetical example.  Suppose you believe that gun control will lead to fewer violent deaths, and someone else believes the opposite.  Now both of you are given the following two mixed pieces of evidence.

(1) In one state with strict gun control, there is a greater than average number of gang wars, which has led to more violent deaths.
(2) In one state with lenient gun control, there have been more school shootings.

Here's how you and the other person can both rationally walk away with greater confidence in your initial beliefs.  Suppose you believe, independently of the debate about gun control's relation to violent deaths, that the presence of school shootings decreases the outbreak of future gang wars (somehow).  Suppose the other person believes - again independently of the debate in question - that gang wars lead to fewer school shootings.  You would accept proposition (2) as confirming your belief while using your auxiliary belief to discount proposition (1), and the other person would do the opposite.  Thus, you would both rationally leave with greater confidence in your beliefs.
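To see the structure of Liao's point in miniature, here is a toy Bayesian sketch of my own (not from Liao's paper).  The priors and likelihood ratios below are invented purely to illustrate how auxiliary beliefs can let two people update on the same mixed evidence and both come away more confident.

    # H = "strict gun control reduces violent deaths".  Each agent multiplies their
    # prior odds by a likelihood ratio P(evidence | H) / P(evidence | not-H) for each
    # datum.  The ratios differ between agents because each agent's auxiliary belief
    # "explains away" the datum that cuts against them.  All numbers are invented.

    def update(prior, likelihood_ratios):
        """Posterior P(H) after multiplying the prior odds by each likelihood ratio."""
        odds = prior / (1.0 - prior)
        for lr in likelihood_ratios:
            odds *= lr
        return odds / (1.0 + odds)

    # You (pro gun control): datum (2) is strong evidence for H, while datum (1) is
    # nearly neutral, because you attribute the strict state's gang wars to its
    # scarcity of school shootings rather than to gun control itself.
    your_posterior = update(0.70, [0.9, 3.0])        # ratios for (1), (2)

    # The other person (anti gun control): datum (1) is strong evidence against H,
    # while datum (2) is nearly neutral, because they attribute the lenient state's
    # school shootings to its scarcity of gang wars rather than to lenient gun laws.
    their_posterior = update(0.30, [1.0 / 3.0, 1.1])

    print("Your confidence in H:        0.70 ->", round(your_posterior, 2))
    print("Their confidence in not-H:   0.70 ->", round(1 - their_posterior, 2))

Both confidences rise (to roughly 0.86 in this toy example), which is exactly the pattern Liao needs: the same two data points, filtered through different background beliefs, can rationally push the two parties further apart.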

Critique:
Liao's defense of the relatedness motive seems weak.  It's certainly irrational to believe in one's own infallibility.  And it's also irrational to completely discount an epistemic peer's opinion.  But it's also irrational to continue to have one's beliefs shifted after having learned of the reasons behind the disagreement.  Once you know that your friend has a belief because of x, y, and z, the fact that he is your friend becomes irrelevant.  Believing simply because your friend says so is irrational.  Yet it is this kind of shifting of beliefs that (I think) is more common.  It is not that people shift their beliefs because of their epistemic humility, but rather to maintain social relations.  And that is irrational.

Liao's defense of the coherence motive also seems weak.  He concedes that people may be biased towards favoring their initial beliefs.  His argument is simply that belief polarization need not entail that people are biased.  It's an empirical point as to whether or not polarization is, in fact, a result of bias, one which he claims Haidt does not substantiate.  Though all this is true, it obscures where the burden of proof lies.  It is on Liao to explain why people systematically gain confidence in their beliefs when given mixed evidence.  If people were assessing the evidence on its merits, independently of their prior commitments, one would expect some to become more confident and others less, with revisions scattering in both directions.  Instead, people almost invariably become more confident in their initial beliefs.  It's Liao who has to explain how that pattern could be rational, not the other way around.

The general structure of Liao's argument works like a wedge.  He first shows that it's technically possible to account for these results while preserving the rationality of moral judgments, and from that mere possibility he suggestively hints that people's moral judgments are in fact rational.  This latter claim, however, is exceptionally lacking in support, and he would do well to acknowledge this more explicitly in his paper.