Utilitarian preferences or action preferences? De-confounding action and moral code in sacrificial dilemmas☆
Introduction
In recent decades, psychology and neuroscience have increasingly turned their attention to the study of moral judgment. One of the most common methods used to study moral judgment entails presenting hypothetical sacrificial dilemmas in which participants choose whether to endorse harming one person in service of a greater good (e.g., Bartels, 2008; Greene et al., 2001). Responses to such dilemmas are frequently used to infer people's relative preferences for utilitarian (i.e., impartial welfare-maximizing) versus deontological (i.e., rights- or duty-based) moral codes (e.g., Lee & Gino, 2015). Such is the level of interest in sacrificial dilemma research that it has even penetrated debates in normative ethics on the relative merits of deontological and utilitarian moral codes (Berker, 2009; Greene, 2003; Singer, 2005).
Despite their widespread popularity as measures of moral preferences, there has been a notable lack of research on the construct validity of sacrificial dilemmas as indicators of people's preferred moral code (i.e., whether sacrificial dilemmas can be considered a valid measure of utilitarian vs. deontological preferences). Although sacrificial dilemma responses are still frequently framed as “utilitarian” or “deontological” choices (Lee & Gino, 2015), recent studies suggest that responses to sacrificial dilemmas do not correlate with other variables in ways expected of a measure of utilitarian versus deontological preferences (Bartels & Pizarro, 2011; Bauman et al., 2014; Duke & Bègue, 2015; Kahane et al., 2015; Rosas & Koenigs, 2014). We aim to further this line of research by addressing a largely unexamined issue concerning the construct validity of sacrificial dilemmas: the confounding of the endorsement of utilitarian outcomes with the endorsement of action.
In standard sacrificial dilemmas, participants choose between two options: acting to uphold a “utilitarian” moral code, or omitting action to uphold a “deontological” moral code (see Supplementary materials for examples). Thus, the distinctions between endorsing utilitarian and deontological moral codes, and acting versus omitting (referred to as “Moral Code” and “Action” for short) are often perfectly confounded. On no occasion, to our knowledge, have these factors been thoroughly teased apart. Without such de-confounding, it is impossible to know whether responses to these dilemmas are driven by Action- versus Moral Code-related preferences (or both).1
An important implication of this confound is that previous research demonstrating a relationship between some predictor (e.g., emotion, reward sensitivity, or behavioral disinhibition) and sacrificial dilemma responses (Choe & Min, 2011; Moore et al., 2011; Pastötter et al., 2013; Seidel & Prinz, 2012; Strohminger et al., 2011; Valdesolo & DeSteno, 2006; van den Bos et al., 2011) may instead be demonstrating a relationship between that predictor and Action (i.e., willingness to endorse intervention in a situation, irrespective of the implied moral code). Critically, if Action (even partly) drives responses to sacrificial dilemmas, existing results cannot be unambiguously interpreted as reflecting psychological processes underlying the application of, or preferences for, specific Moral Codes. The problem for the field of moral psychology is that the extent to which Action, rather than Moral Code, drives responses to sacrificial dilemmas remains unknown.
Although this confound has been acknowledged in previous work (Baron et al., 2015; Conway & Gawronski, 2013), only one study has given it any kind of empirical treatment. Baron et al. (2015) presented two studies in which standard sacrificial dilemmas were administered alongside another set of dilemmas (called “rule dilemmas”) in which participants judged the moral acceptability of two different actions: one in which a rule was actively followed, producing a bad outcome, and the other in which a rule was actively broken, producing a (relatively) better outcome. While proponents of standard sacrificial dilemmas would expect a strong positive correlation between the two, across these two studies Baron et al. observed correlations between the two sets of dilemmas of just 0.20 and 0.31. These small correlations suggest that the standard dilemmas and rule dilemmas may be measuring separate but related constructs (thus affirming concerns about the Action confound). However, the implications of these findings are unclear: because the two sets of dilemmas were not closely matched on other characteristics (e.g., the nature of the scenario and the magnitude of the consequences of the response options), the correlation between them may have been attenuated by differences other than the Action confound.
A necessary first step in addressing the inferential issues outlined above is to de-confound Action and Moral Code. To achieve this, we conducted two studies in which participants responded to both (a) standard sacrificial dilemmas which required participants to judge the acceptability of performing a sacrificial action themselves (i.e., the “utilitarian” responses required action), and (b) subtly modified versions of the same dilemmas in which participants judged the moral acceptability of stopping a third person from performing the sacrificial action (i.e., the “utilitarian” response required omission). Thus, across the two versions of the same dilemma, responding consistently for one dimension (e.g., Moral Code) required responding inconsistently for the other (Action).
To illustrate, imagine two people: a “utilitarian” whose responses are driven by a utilitarian moral code, and an “interventionist,” whose responses are driven by a preference for intervening in moral situations. Both prefer flipping the switch to save lives in the original Trolley Problem, but in the modified Trolley Problem, their responses diverge: the utilitarian should prefer allowing somebody else to flip the switch, whereas the interventionist should prefer stopping the other person from flipping the switch.2 If standard sacrificial dilemmas were valid indicators of one's preferred moral code (a hypothesis we refer to as the “utilitarian hypothesis” for short), we would expect the manipulation of Action (i.e., whether action or omission leads to the “utilitarian” response) to have minimal effect on participants' preferred moral code across the two variants of the same dilemma. If, however, participants endorsed different moral codes in different versions of the same dilemma (i.e., they were influenced by Action), our confidence in the utilitarian hypothesis would be undermined.
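The diverging predictions for these two idealized responders can be laid out as a small truth table. The following sketch is purely illustrative (the responder labels and function are ours, not part of the studies' materials):

```python
# Illustrative sketch: predicted choices of two idealized responders.
# In the standard dilemma, the "utilitarian" outcome requires acting;
# in the modified dilemma, it requires omission (letting a bystander act).

def predicted_choice(responder: str, dilemma: str) -> str:
    """Return the predicted choice, 'act' or 'omit'."""
    if responder == "utilitarian":
        # Endorses whichever response yields the welfare-maximizing outcome.
        return "act" if dilemma == "standard" else "omit"
    if responder == "interventionist":
        # Endorses intervening, regardless of the implied moral code.
        return "act"
    raise ValueError(f"unknown responder: {responder}")

for responder in ("utilitarian", "interventionist"):
    for dilemma in ("standard", "modified"):
        print(responder, dilemma, "->", predicted_choice(responder, dilemma))
```

The two responder types agree in the standard dilemma (both choose to act) and come apart only in the modified dilemma, which is precisely why the standard version alone cannot distinguish them.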
Method
Given the substantial overlap between the methods employed in the two studies, we report all methods and results together.
Sacrificial dilemmas
In both studies, after providing informed consent, each participant responded to two versions of three sacrificial dilemmas based on the set of Moore et al. (2008; see Supplementary materials). For standard dilemmas, the utilitarian moral code was aligned with acting (as is typical in sacrificial dilemma research). In the modified dilemmas, the number of people at risk, victim characteristics, and means of sacrifice were identical to those of the corresponding standard dilemma; however, a bystander …
Correlational analyses
As a first step, we computed correlations between acceptability judgments for all three dilemma pairs across both studies. Whereas the utilitarian hypothesis would predict strong negative correlations between responses to each dilemma pair, the six correlations ranged from −0.02 to −0.19, with an average of −0.12 (similar to Baron et al., 2015).
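Why strong negative correlations are the prediction can be made concrete with a toy simulation (ours, not the paper's analysis; the responder model and noise levels are assumptions). If judgments were driven by moral code, acceptability of acting in the standard dilemma should track utilitarian preference, while acceptability of stopping the bystander in the modified dilemma should track its opposite, yielding a strongly negative correlation; if judgments were driven by action preference, both should track willingness to intervene, yielding a strongly positive one:

```python
import random
import statistics


def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


random.seed(0)
n = 1000

# Moral-code-driven responders: standard-dilemma acceptability tracks
# utilitarian preference u; modified-dilemma acceptability (of stopping
# the bystander) tracks -u, since a utilitarian wants the act to proceed.
u = [random.gauss(0, 1) for _ in range(n)]
standard = [ui + random.gauss(0, 0.3) for ui in u]
modified = [-ui + random.gauss(0, 0.3) for ui in u]
print(round(pearson(standard, modified), 2))  # strongly negative

# Action-driven responders: both judgments track a preference to intervene.
a = [random.gauss(0, 1) for _ in range(n)]
standard_a = [ai + random.gauss(0, 0.3) for ai in a]
modified_a = [ai + random.gauss(0, 0.3) for ai in a]
print(round(pearson(standard_a, modified_a), 2))  # strongly positive
```

Near-zero observed correlations, like those reported above, sit between these two idealized patterns, consistent with a mixture of Action- and Moral Code-driven responding.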
Distinguishing sensitivity to action vs. moral code
To better understand the apparent inconsistency in participants' responses, our primary analysis entailed a form of Observation Oriented Modeling (see …
Discussion
Over the last decade, there has been a proliferation of studies employing sacrificial dilemmas to examine the psychological and biological factors underlying preferences for utilitarian versus deontological moral codes. The aim of this paper was to explore a largely overlooked confound built into standard sacrificial dilemmas: the confounding of Action (endorsing action versus inaction) and Moral Code (endorsing a utilitarian versus deontological moral code). Across two studies we found that, …
Conclusion
Despite the widespread use of sacrificial dilemmas to measure people's relative preferences for utilitarian versus deontological moral principles, recent studies have highlighted a concerning lack of evidence for their validity for such purposes. We showed that responses to sacrificial dilemmas seem just as likely to be driven by action preferences as by moral preferences. As well as providing further cause for concern about the validity of sacrificial dilemmas, our findings raise the …
Acknowledgements
This project was funded by an internal grant provided by The University of Melbourne. We are grateful to Chelsea Corless and Michael Susman for providing valuable feedback on previous drafts and assistance with data collection for Study 2, and to Margaret Webb, Jonathan Baron, and members of the Melbourne Moral Psychology Lab, and the Macquarie University Centre for Agency, Values and Ethics for feedback on previous versions of this work.
References

- Baron et al. (2015). Why does the cognitive reflection test (sometimes) predict utilitarian moral judgment (and other things)? Journal of Applied Research in Memory and Cognition.
- Bartels (2008). Principled moral sentiment and the flexibility of moral judgment and decision making. Cognition.
- Bartels & Pizarro (2011). The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas. Cognition.
- Crockett et al. (2015). Dissociable effects of serotonin and dopamine on the valuation of harm in moral decision making. Current Biology.
- Duke & Bègue (2015). The drunk utilitarian: Blood alcohol concentration predicts utilitarian responses in moral dilemmas. Cognition.
- Kahane et al. (2015). “Utilitarian” judgments in sacrificial moral dilemmas do not reflect impartial concern for the greater good. Cognition.
- Lee & Gino (2015). Poker-faced morality: Concealing emotions leads to utilitarian decision making. Organizational Behavior and Human Decision Processes.
- Moore et al. (2011). Individual differences in sensitivity to reward and punishment predict moral judgment. Personality and Individual Differences.
- Pastötter et al. (2013). To push or not to push? Affective influences on moral judgment depend on decision frame. Cognition.
- Strohminger et al. (2011). Divergent effects of different positive emotions on moral judgment. Cognition.
- Baron & Miller (2000). Limiting the scope of moral obligations to help: A cross-cultural investigation. Journal of Cross-Cultural Psychology.
- Bauman et al. (2014). Revisiting external validity: Concerns about trolley problems and other sacrificial dilemmas in moral psychology. Social and Personality Psychology Compass.
- Berker (2009). The normative insignificance of neuroscience. Philosophy & Public Affairs.
- Buhrmester et al. (2011). Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science.
- Choe & Min (2011). Who makes utilitarian judgments? The influences of emotions on utilitarian judgments. Judgment and Decision Making.
☆ The authors declare that there are no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.