Using economic experiments to assess the validity of stated preference contingent behavior responses

https://doi.org/10.1016/j.jeem.2022.102659

Abstract

Contingent behavior (CB), a stated preference (SP) method, elicits individuals’ intended behavior in quantities or frequencies under hypothetical scenarios. CB has primarily been used to elicit preferences in recreation demand models or to assess market demand. Although CB shares the hypothetical nature of other SP methods, there has been limited assessment of its validity and incentive compatibility. Focusing on hypothetical bias and framing effects, we design an incentive-compatible decision mechanism and use it in economic experiments to examine the validity of CB. We find hypothetical bias associated with an overstatement of quantities in CB responses, but the overstatement does not appear to arise from strategic behavior. We also find that overstating quantities is not significantly affected by framing, but framing does affect the convergence of CB and revealed preference responses. These findings raise questions about the validity of CB research and its demand-revealing properties but provide some avenues to address these concerns.

Introduction

The methods and conditions under which stated preference (SP) research produces valid preference and value estimates have received considerable attention in the economic literature. Issues such as hypothetical bias, the willingness-to-pay (WTP) and willingness-to-accept (WTA) disparity, and sensitivity to scope have challenged economists to gain additional insights into contingent valuation and other stated preference methods (Carson, 2012; Haab et al., 2013; Hausman, 2012; Kling et al., 2012). Nevertheless, SP methods are widely applied to elicit economic values and preferences when revealed preference or market data and prices are not available. The importance of SP research to real-world applications has motivated researchers to assess the validity of these methods and to develop best practice recommendations so that stated preference practitioners can provide value estimates that are as accurate as possible (Bishop and Boyle, 2017; Johnston et al., 2017).

Three common SP methods that elicit values and preferences from survey responses are contingent valuation (CV), discrete choice experiments (CE), and contingent behavior (CB). The CV method relies on respondents making a single binary choice (for or against) on a proposed change at a given cost (Johnston et al., 2017). The CE method has respondents choose among two or more alternatives with multiple attributes and costs (Johnston et al., 2017).1 Finally, with the CB method, respondents indicate intended behavior in quantities (e.g. number of park visits) given a proposed change.2 CB scenarios can be presented in ways similar to CV or CE scenarios. Regardless of presentation, CB always asks for intended behavior in quantities or frequencies (Englin and Cameron, 1996), rather than a binary vote as in CV or a choice among two or more options as in CE. As a result, CV/CE and CB differ in focus: CV and CE focus on valuation (e.g. how much one would be willing to pay), while CB focuses on behavior (e.g. how many trips one would take). Given these different foci, CB has different applications than CV/CE. CV and CE are widely used to elicit use and non-use values and remain the only available economic approaches to eliciting non-use values of goods such as ecosystem services when revealed preference (RP) data (i.e. preferences revealed by actual observations or choices) are not available (Johnston et al., 2017). In recreation demand models, CB data are mostly used in combination with RP data to construct economic values of recreation resources (e.g. Grijalva et al., 2002; Nobel et al., 2020; Bertram et al., 2020; Lloyd-Smith et al., 2019a), adding more variation in attributes or travel costs and addressing potential omitted variable bias in model estimation (Englin and Cameron, 1996; Yi and Herriges, 2017).
In food demand or marketing studies, CB is used to assess market demand for new attributes or products that consumers purchase in multiples (e.g. milk in Kuperis et al., 1999; beverages in Yang et al., 2020). Notwithstanding these differences, CB shares its hypothetical nature with CV and CE, and therefore also shares the potential issues concerning its validity.

Validity assessments examining the unbiasedness of SP estimates differ across SP methods depending on their focus and application. Researchers have paid more attention to assessing the validity of CV and CE methods due to their unique role in eliciting non-use values for welfare analyses. Efforts have been made to evaluate criterion validity, construct validity, and convergent validity of CV and CE through game-theoretic models of incentive compatibility (e.g. Carson and Groves, 2007; Vossler et al., 2012; Carson et al., 2014), empirical evidence from survey and experimental approaches (e.g. Ryan and Watson, 2009; Heberlein et al., 2005; Doyon and Bergeron, 2016; List, 2001), and meta-analyses (Penn and Hu, 2018). Yet only a few studies evaluate the construct validity and convergent validity of CB methods with stated preference surveys (e.g. Whitehead et al., 2010; Atkinson and Whitehead, 2015; Grijalva et al., 2002; Whitehead et al., 2014; Jeon and Herriges, 2010), and no studies, to the best of our knowledge, have assessed the criterion and convergent validity of CB methods using economic experiments.

The objective of this study is to assess the validity of stated preference contingent behavior methods experimentally. We design an economic experiment with an incentive-compatible decision mechanism to elicit intended behavior in quantities as a basis for criterion validity assessment in a between-subject design. To detect hypothetical bias, we relax the condition of incentive compatibility – a necessary condition for criterion validity assessments – in a treatment with subjunctive and nondirectional framing that is adapted from SP surveys. We also test whether strategic overstating can be induced by directional framing to increase provision probability (Lloyd-Smith and Adamowicz, 2018; Lusk et al., 2007). To assess convergent validity, we use a within-subject design to examine whether hypothetical CB and subsequent incentive-compatible RP responses of the same individuals converge.

We find hypothetical bias in CB responses: individuals tend to overstate their intended behavior. Strategic behavior, however, does not appear to be the cause of the overstatement, as directional (provision) framing does not substantially affect the magnitude of hypothetical bias. Framing does affect the convergence of CB and subsequent RP responses – they converge in the directional framing treatment but not in the nondirectional framing case. We also find that non-convergence is mainly driven by participants who did not have prior experience with the good/attribute used in the decisions. These findings on criterion and convergent validity highlight the importance of incentive compatibility and framing, and they carry implications for improving CB validity in survey design, sampling, and data analysis.

By using experimental methods to assess the validity of CB responses, this paper contributes to the experimental economics and stated preference literatures in three ways. First, we formally design an incentive-compatible decision mechanism that elicits individual preferences by asking for quantity decisions – much of the experimental literature, and particularly auctions, focuses on value elicitation. Second, this study, to the best of our knowledge, is the first to test the criterion validity of contingent behavior responses using an incentive-compatible design with real money payments. Third, we provide experimental evidence on the convergent validity of CB responses in a controlled environment, in contrast to previous studies that have tested convergent validity with stated preference surveys. Altogether, by examining criterion and convergent validity in the same experimental setting, we draw attention to the selection of appropriate tests and approaches for assessing the validity of SP methods.

This paper proceeds as follows. In Section 2, we present a survey of relevant literature that motivates the experimental design. Section 3 provides a detailed description of the experimental design and procedure. Section 4 presents the main results of the experiments, followed by the discussion and conclusion in Section 5.

Section snippets

Literature review

Validity of stated preference methods refers to minimizing the bias in estimating the “true value.” However, as the “true value” is often not readily observable, the validity of stated preference methods is mostly assessed along three dimensions: criterion validity, content validity, and construct validity (Bishop and Boyle, 2017). Criterion validity compares estimates from a stated preference task with a presumably true value or criterion that usually involves real money payments (Kling et

Decision mechanism

We propose the following mechanism to elicit individual preferences: (a) individuals make multiple decisions concerning the quantity of a good they are willing to purchase at various per-unit prices; (b) one of their decisions is then randomly selected for the actual transaction – individuals purchase the indicated quantity at the given per-unit price.
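The settlement step of this mechanism can be illustrated with a minimal sketch: a participant states a quantity for each posted per-unit price, and one decision is drawn at random and transacted. This is an illustrative reading of the mechanism described above; the function and variable names, the prices, and the quantities are our own assumptions, not taken from the paper.

```python
import random

def elicit_transaction(decisions, rng=random):
    """Randomly select one (unit_price, quantity) decision and settle it.

    decisions: list of (unit_price, quantity) pairs, one per posted price.
    Returns the selected unit price, the quantity purchased, and the payment due.
    """
    unit_price, quantity = rng.choice(decisions)
    return {
        "unit_price": unit_price,
        "quantity": quantity,
        "payment": unit_price * quantity,  # participant pays price x quantity
    }

# Hypothetical example: a participant states demand at four posted prices.
stated = [(0.25, 8), (0.50, 5), (1.00, 2), (2.00, 0)]
rng = random.Random(42)  # seeded only to make the sketch reproducible
outcome = elicit_transaction(stated, rng)
```

Because every stated decision has the same chance of being the binding one, a participant's best strategy is to report the quantity they truly want at each price, which is what makes the mechanism incentive compatible.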

The main difference between this decision mechanism and most auction mechanisms is that individuals decide on a quantity as opposed to a value

Results

Fig. 2 presents demand curves (average quantities of chocolates against unit prices) at each stage by treatment groups.9 Table B1 in Appendix B reports the associated mean values and standard deviations of quantities. For all groups, we see an upward shift of

Discussion

We find mixed results regarding criterion and convergent validity from the experiments. Using the same incentive-compatible decision mechanism as the standard (i.e. control group, criterion, or RP responses), we focus on between-group comparisons to examine criterion validity and within-group comparisons to understand convergent validity. Results on criterion validity show that hypothetical bias associated with an overstatement of intended behavior exists in CB responses. While provision

Conclusion

In this paper, we examine the validity of stated preference contingent behavior responses using economic experiments, with the focus on criterion and convergent validity associated with private goods. In order to assess criterion validity, we first propose an incentive-compatible decision mechanism as the criterion. The decision mechanism elicits individuals' preferences by asking for the number of units of a good they want to buy given a per-unit price. The incentive compatibility of the

Declaration of competing interest

The authors declare that they have no conflict of interest.

Acknowledgement

We would like to thank the following individuals and groups for their valuable comments and suggestions on this paper: two anonymous reviewers and the co-editor, participants at the Department of Resource Economics and Environmental Sociology (University of Alberta) seminar and the 2021 AERE Summer Conference. We would like to thank Zhanji Zhang and Kalli Herlein for their help with this research. We gratefully acknowledge the funding support from Genome Canada, Alberta Agriculture and Forestry

References (54)

  • G.M. Becker et al. Measuring utility by a single-response sequential method. Behav. Sci. (1964)
  • S. Bergeron et al. Strategic response: a key to understand how cheap talk works. Can. J. Agric. Econ. (2019)
  • C. Bertram et al. Contingent behavior and asymmetric preferences for Baltic Sea coastal recreation. Environ. Resour. Econ. (2020)
  • R.C. Bishop et al. Reliability and validity in nonmarket valuation. Environ. Resour. Econ. (2019)
  • R.C. Bishop et al. Reliability and validity in nonmarket valuation
  • K. Blumenschein et al. Eliciting willingness to pay without bias: evidence from a field experiment. Econ. J. (2008)
  • M. Canavari et al. How to run an experimental auction: a review of recent advances. Eur. Rev. Agric. Econ. (2019)
  • R.T. Carson. Contingent valuation: a practical alternative when prices aren't available. J. Econ. Perspect. (2012)
  • R.T. Carson et al. Incentive and informational properties of preference questions. Environ. Resour. Econ. (2007)
  • R.T. Carson et al. Consequentiality: a theoretical and experimental exploration of a single binary choice. J. Assoc. Environ. Resour. Econ. (2014)
  • R.G. Cummings et al. Unbiased value estimates for environmental goods: a cheap talk design for the contingent valuation method. Am. Econ. Rev. (1999)
  • M. Doyon et al. Understanding strategic behavior and its contribution to hypothetical bias when eliciting values for a private good. Can. J. Agric. Econ. (2016)
  • D.W. Elfenbein et al. Charity as a substitute for reputation: evidence from an online marketplace. Rev. Econ. Stud. (2012)
  • J. Englin et al. Augmenting travel cost models with contingent behavior data: Poisson regression analyses with individual panel data. Environ. Resour. Econ. (1996)
  • T.C. Grijalva et al. Testing the validity of contingent behavior trip responses. Am. J. Agric. Econ. (2002)
  • T.C. Haab et al. From hopeless to curious? Thoughts on Hausman's “dubious to hopeless” critique of contingent valuation. Appl. Econ. Perspect. Pol. (2013)
  • J. Hausman. Contingent valuation: from dubious to hopeless. J. Econ. Perspect. (2012)