Applicants’ Fairness Perceptions of Algorithm-Driven Hiring Procedures

Journal of Business Ethics

Abstract

Despite the rapid adoption of technology in human resource departments, there is little empirical work examining the potential challenges of algorithmic decision-making in the recruitment process. In this paper, we take the perspective of job applicants and examine how they perceive the use of algorithms in selection and recruitment. Across four studies on Amazon Mechanical Turk, we show that people in the role of a job applicant perceive algorithm-driven recruitment processes as less fair than human-only or algorithm-assisted human processes. This effect persists regardless of whether the outcome is favorable to the applicant. A potential mechanism underlying this algorithm resistance is the belief that algorithms cannot recognize applicants' uniqueness as candidates. Although the use of algorithms has several benefits for organizations, such as improved efficiency and bias reduction, our results highlight a potential cost of using them to screen potential employees during recruitment.


Notes

  1. In a 2022 survey, 66% of recruiters reported using automated processes in recruitment, and more than 70% of millennial applicants suspected that companies used AI in the recruitment process (Stefanowicz, 2022).

  2. By “AI-enabled hiring,” we refer to systems in which an algorithm processes information provided by candidates, evaluates candidates based on criteria set by the recruiter, and forwards this evaluation to the recruiter. The algorithm need not collect additional data about the candidate, nor process the data using machine learning. As a result, “algorithm-driven,” “AI-assisted,” and “AI-enabled” are used interchangeably in this article.

  3. A third type of justice, interactional justice, refers to the fairness of the interpersonal treatment employees receive from organizational decision-makers, such as their supervisors (Kwon et al., 2008). In the recruiting context, it would capture whether candidates believed they were treated fairly in their interactions with recruiters. We do not focus on this form of justice in this research, as our participants did not interact with any recruiter.

References

  • Alder, G. S., & Gilbert, J. (2006). Achieving ethics and fairness in hiring: Going beyond the law. Journal of Business Ethics, 68(4), 449–464.

  • Antonakis, J., Bendahan, S., Jacquart, P., & Lalive, R. (2010). On making causal claims: A review and recommendations. Leadership Quarterly, 21(6), 1086–1120.

  • Antonakis, J., Bendahan, S., Jacquart, P., & Lalive, R. (2014). Causality and endogeneity: Problems and solutions. Oxford Handbook of Leadership and Organizations, 1, 93–117.

  • Arvey, R. D., & Renz, G. L. (1992). Fairness in the selection of employees. Journal of Business Ethics, 11(5), 331–340.

  • Ball, K. (2010). Workplace surveillance: An overview. Labor History, 51(1), 87–106.

  • Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173–1182.

  • Bertrand, M., & Mullainathan, S. (2004). Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. American Economic Review, 94(4), 991–1013.

  • Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21–34.

  • Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018). ‘It’s reducing a human being to a percentage’: Perceptions of justice in algorithmic decisions. In Proceedings of the 2018 Chi Conference on human factors in computing systems (pp. 1–14).

  • Black, J. S., & van Esch, P. (2020). AI-enabled recruiting: What is it and how should a manager use it? Business Horizons, 63(2), 215–226.

  • Bound, J., Jaeger, D. A., & Baker, R. M. (1995). Problems with instrumental variables estimation when the correlation between the instruments and the endogenous explanatory variable is weak. Journal of the American Statistical Association, 90(430), 443–450.

  • Brewer, M. B. (1991). The social self: On being the same and different at the same time. Personality and Social Psychology Bulletin, 17(5), 475–482.

  • Ciancetta, L. M., & Roch, S. G. (2021). Backlash in performance feedback: Deepening the understanding of the role of gender in performance appraisal. Human Resource Management, 60(4), 641–657.

  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum.

  • Colquitt, J. A., Conlon, D. E., Wesson, M. J., & Porter, C. O. L. H. (2001). Justice at the millennium: A meta-analytic review of 25 years of organizational justice research. Journal of Applied Psychology, 86(3), 425–445.

  • Colquitt, J. A., Scott, B. A., Rodell, J. B., Long, D. M., Zapata, C. P., Conlon, D. E., & Wesson, M. J. (2013). Justice at the millennium, a decade later: A meta-analytic test of social exchange and affect-based perspectives. Journal of Applied Psychology, 98(2), 199–236.

  • Conlon, D. E., Porter, C. O., & Parks, J. M. (2004). The fairness of decision rules. Journal of Management, 30(3), 329–349.

  • Cowgill, B. (2021). Bias and productivity in humans and algorithms: Theory and evidence from resume screening. Working paper, Columbia Business School.

  • Cropanzano, R., Bowen, D. E., & Gilliland, S. W. (2007). The management of organizational justice. Academy of Management Perspectives, 21(4), 34–48.

  • Crump, M. J., McDonnell, J. V., & Gureckis, T. M. (2013). Evaluating Amazon’s mechanical Turk as a tool for experimental behavioral research. PLoS ONE, 8(3), e57410.

  • Demuijnck, G. (2009). Non-discrimination in human resources management as a moral obligation. Journal of Business Ethics, 88(1), 83–101.

  • Diekmann, K. A., Samuels, S. M., Ross, L., & Bazerman, M. H. (1997). Self-interest and fairness in problems of resource allocation: Allocators versus recipients. Journal of Personality and Social Psychology, 72(5), 1061–1074.

  • Dietvorst, B. J., Simmons, J. P., & Massey, C. (2014). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 143(6), 1–13.

  • Dietvorst, B. J., Simmons, J. P., & Massey, C. (2016). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155–1170.

  • Dineen, B. R., Noe, R. A., & Wang, C. (2004). Perceived fairness of web-based applicant screening procedures: Weighing the rules of justice and the role of individual differences. Human Resource Management, 43(2–3), 127–145.

  • Donaldson, T., & Dunfee, T. W. (1995). Integrative social contracts theory: A communitarian conception of economic ethics. Economics & Philosophy, 11(1), 85–112.

  • dos Santos, N. R., Pais, L., Leitão, C. C., & Passmore, J. (2017). Ethics in recruitment and selection. In H. Goldstein, E. Pulakos, J. Passmore, & C. Semedo (Eds.), The Wiley Blackwell handbook of the psychology of recruitment, selection and employee retention (pp. 91–112). John Wiley & Sons.

  • Elish, M. C. (2019). Moral crumple zones: Cautionary tales in human-robot interaction. Engaging Science, Technology, and Society, 5, 40–60.

  • Enderle, G. (2021). Corporate responsibility for wealth creation and human rights. Cambridge University Press.

  • Fazelpour, S., & Lipton, Z. C. (2020, February). Algorithmic fairness from a non-ideal perspective. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 57–63).

  • Feng, Z., Liu, Y., Wang, Z., & Savani, K. (2020). Let’s choose one of each: Using the partition dependence bias to increase diversity in hiring decisions. Organizational Behavior and Human Decision Processes, 158, 11–26.

  • Figueroa-Armijos, M., Clark, B. B., & da Motta Veiga, S. P. (2022). Ethical perceptions of AI in hiring and organizational trust: The role of performance expectancy and social influence. Journal of Business Ethics. https://doi.org/10.1007/s10551-022-05166-2

  • Folger, R., & Konovsky, M. A. (1989). Effects of procedural and distributive justice on reactions to pay raise decisions. Academy of Management Journal, 32(1), 115–130.

  • Fromkin, H. L., & Snyder, C. R. (1980). The search for uniqueness and valuation of scarcity. In K. Gergen, M. Greenberg, & R. Willis (Eds.), Social exchange (pp. 57–75). Springer.

  • Giermindl, L. M., Strich, F., Christ, O., Leicht-Deobald, U., & Redzepi, A. (2021). The dark sides of people analytics: Reviewing the perils for organisations and employees. European Journal of Information Systems, 1–26.

  • Gilliland, S. W. (1993). The perceived fairness of selection systems: An organizational justice perspective. Academy of Management Review, 18(4), 694–734.

  • Giroux, M., Kim, J., Lee, J. C., & Park, J. (2022). Artificial intelligence and declined guilt: Retailing morality comparison between human and AI. Journal of Business Ethics, 178, 1027–1041.

  • Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619.

  • Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23(2), 101–124.

  • Greenberg, J. (1987). A taxonomy of organizational justice theories. Academy of Management Review, 12(1), 9–22.

  • Greenberg, J. (1990). Organizational justice: Yesterday, today, and tomorrow. Journal of Management, 16(2), 399–432.

  • Greenwood, M. (2002). Ethics and HRM: A review and conceptual analysis. Journal of Business Ethics, 36(3), 261–278.

  • Greenwood, M. (2013). Ethical analyses of HRM: A review and research agenda. Journal of Business Ethics, 114(2), 355–366.

  • Grgic-Hlaca, N., Redmiles, E. M., Gummadi, K. P., & Weller, A. (2018, April). Human perceptions of fairness in algorithmic decision making: A case study of criminal risk prediction. In Proceedings of the 2018 World Wide Web Conference (pp. 903–912).

  • Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., & Nelson, C. (2000). Clinical versus mechanical prediction: A meta-analysis. Psychological Assessment, 12(1), 19–30.

  • Haas, C. (2019). The price of fairness-A framework to explore trade-offs in algorithmic fairness. In 40th International Conference on Information Systems, ICIS.

  • Hannen, T. (2020). What went wrong with the A-level algorithm? Financial Times. Retrieved Feb 8, 2022 from https://www.ft.com/video/282ecd1f-8402-4bf4-8ee7-3d179ce5fcc2.

  • Hausknecht, J. P., Day, D. V., & Thomas, S. C. (2004). Applicant reactions to selection procedures: An updated model and meta-analysis. Personnel Psychology, 57(3), 639–683.

  • Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. The Guilford Press.

  • Heckman, J. J., & Pinto, R. (2015). Econometric mediation analyses: Identifying the sources of treatment effects from experimentally estimated production technologies with unmeasured and mismeasured inputs. Econometric Reviews, 34(1–2), 6–31.

  • Heckman, J., Pinto, R., & Savelyev, P. (2013). Understanding the mechanisms through which an influential early childhood program boosted adult outcomes. American Economic Review, 103(6), 2052–2086.

  • Ho, A., Hancock, J., & Miner, A. S. (2018). Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. Journal of Communication, 68(4), 712–733.

  • Hunkenschroer, A. L., & Kriebitz, A. (2022). Is AI recruiting (un)ethical? A human rights perspective on the use of AI for hiring. AI and Ethics. https://doi.org/10.1007/s43681-022-00166-4

  • Hunkenschroer, A. L., & Lütge, C. (2022). Ethics of AI-enabled recruiting and selection: A review and research agenda. Journal of Business Ethics, 178, 977–1007.

  • IBM. (2019). The role of AI in mitigating bias to enhance diversity and inclusion. Retrieved April 8, 2022 from https://www.ibm.com/downloads/cas/2DZELQ4O.

  • Imai, K., Keele, L., Tingley, D., & Yamamoto, T. (2011). Unpacking the black box of causality: Learning about causal mechanisms from experimental and observational studies. American Political Science Review, 105(4), 765–789.

  • Imai, K., Keele, L., & Yamamoto, T. (2010). Identification, inference and sensitivity analysis for causal mediation effects. Statistical Science, 25(1), 51–71.

  • Imbens, G. W., & Wooldridge, J. (2009). Recent developments in the econometrics of program evaluation. Journal of Economic Literature, 47(1), 5–86.

  • Islam, G., & Greenwood, M. (2022). The metrics of ethics and the ethics of metrics. Journal of Business Ethics, 175(1), 1–5.

  • Jago, A. S., & Laurin, K. (2021). Assumptions about algorithms’ capacity for discrimination. Personality and Social Psychology Bulletin, 1–14.

  • Jago, A. S. (2019). Algorithms and authenticity. Academy of Management Discoveries, 5(1), 38–56.

  • John-Mathews, J. M., Cardon, D., & Balagué, C. (2022). From reality to world. A critical perspective on AI fairness. Journal of Business Ethics, 178, 945–959.

  • Johnson, D. G. (2015). Technology with No Human Responsibility? Journal of Business Ethics, 127(4), 707–715.

  • Johnson, S. K., Hekman, D. R., & Chan, E. T. (2016). If there’s only one woman in your candidate pool, there’s statistically no chance she’ll be hired. Harvard Business Review, 26(4), 1–7.

  • Jordan, J. S., & Turner, B. A. (2008). The feasibility of single-item measures for organizational justice. Measurement in Physical Education and Exercise Science, 12(4), 237–257.

  • Kelley, S. (2022). Employee perceptions of the effective adoption of AI principles. Journal of Business Ethics. https://doi.org/10.1007/s10551-022-05051-y

  • Kim, T. W., & Routledge, B. R. (2022). Why a right to an explanation of algorithmic decision-making should exist: A trust-based approach. Business Ethics Quarterly, 32(1), 75–102.

  • Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2018). Human decisions and machine predictions. Quarterly Journal of Economics, 133(1), 237–293.

  • Kriebitz, A., & Lütge, C. (2020). Artificial intelligence and human rights: A business ethical assessment. Business and Human Rights Journal, 5(1), 84–104.

  • Kwon, S., Kim, M. S., Kang, S. C., & Kim, M. U. (2008). Employee reactions to gainsharing under seniority pay systems: The mediating effect of distributive, procedural, and interactional justice. Human Resource Management, 47(4), 757–775.

  • Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 1–16.

  • Leicht-Deobald, U., Busch, T., Schank, C., Weibel, A., Schafheitle, S., Wildhaber, I., & Kasper, G. (2019). The challenges of algorithm-based HR decision-making for personal integrity. Journal of Business Ethics, 160(2), 377–392.

  • Leventhal, G. S. (1980). What should be done with equity theory? In K. Gergen, M. Greenberg, & R. Willis (Eds.), Social exchange (pp. 27–55). Springer.

  • Li, S., Jain, K., & Tzini, K. (2021). When supervisor support backfires: The link between perceived supervisor support and unethical pro-supervisor behavior. Journal of Business Ethics, 1–19.

  • Lind, E. A. (2001). Fairness heuristic theory: Justice judgments as pivotal cognitions in organizational relations. Advances in Organizational Justice, 56(8), 88–96.

  • Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650.

  • Lucas, G. M., Knowles, M. L., Gardner, W. L., Molden, D. C., & Jefferis, V. E. (2010). Increasing social engagement among lonely individuals: The role of acceptance cues and promotion motivations. Personality and Social Psychology Bulletin, 36(10), 1346–1359.

  • Lynn, M., & Harris, J. (1997). Individual differences in the pursuit of self-uniqueness through consumption. Journal of Applied Social Psychology, 27(21), 1861–1883.

  • Martin, K. (2019). Ethical implications and accountability of algorithms. Journal of Business Ethics, 160(4), 835–850.

  • Martin, K., & Freeman, R. E. (2004). The separation of technology and ethics in business ethics. Journal of Business Ethics, 53(4), 353–364.

  • Martin, K., Shilton, K., & Smith, J. (2019). Business and the ethical implications of technology: Introduction to the symposium. Journal of Business Ethics, 160(2), 307–317.

  • McCarthy, J. M., Bauer, T. N., Truxillo, D. M., Anderson, N. R., Costa, A. C., & Ahmed, S. M. (2017a). Applicant perspectives during selection: A review addressing “So what?”, “What’s new?”, and “Where to next?” Journal of Management, 43(6), 1693–1725.

  • McCarthy, J. M., Bauer, T. N., Truxillo, D. M., Campion, M. C., Van Iddekinge, C. H., & Campion, M. A. (2017b). Using pre-test explanations to improve test-taker reactions: Testing a set of “wise” interventions. Organizational Behavior and Human Decision Processes, 141, 43–56.

  • Messick, D. M., & Sentis, K. P. (1979). Fairness and preference. Journal of Experimental Social Psychology, 15(4), 418–434.

  • Miller, A. P. (2018, July 26). Want less-biased decisions? Use algorithms. Harvard Business Review. Retrieved Dec 12, 2018 from https://hbr.org/2018/07/want-less-biased-decisions-use-algorithms.

  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 2053951716679679.

  • Montgomery, J. M., Nyhan, B., & Torres, M. (2018). How conditioning on post-treatment variables can ruin your experiment and what to do about it. American Journal of Political Science, 62(3), 760–775.

  • Morse, L., Teodorescu, M. H. M., Awwad, Y., & Kane, G. C. (2021). Do the ends justify the means? Variation in the distributive and procedural fairness of machine learning algorithms. Journal of Business Ethics, 1–13.

  • Newman, D. T., Fast, N. J., & Harmon, D. J. (2020). When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions. Organizational Behavior and Human Decision Processes, 160, 149–167.

  • O’Connor, E. P., & Crowley-Henry, M. (2019). Exploring the relationship between exclusive talent management, perceived organizational justice and employee engagement: Bridging the literature. Journal of Business Ethics, 156(4), 903–917.

  • O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group.

  • Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

  • Pickett, C. L., & Gardner, W. L. (2005). The social monitoring system: Enhanced sensitivity to social cues as an adaptive response to social exclusion. In K. D. Williams, J. P. Forgas, & W. von Hippel (Eds.), The social outcast: Ostracism, social exclusion, rejection, and bullying (pp. 213–226). Psychology Press.

  • Ployhart, R. E., Schmitt, N., & Tippins, N. T. (2017). Solving the supreme problem: 100 years of selection and recruitment at the Journal of Applied Psychology. Journal of Applied Psychology, 102(3), 291–304.

  • Polli, F. (2019). Using AI to eliminate bias from hiring. Harvard Business Review, 29.

  • Promberger, M., & Baron, J. (2006). Do patients trust computers? Journal of Behavioral Decision Making, 19(5), 455–468.

  • Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020, January). Mitigating bias in algorithmic hiring: Evaluating claims and practices. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 469–481).

  • Randstad. (2019). Randstad aims to deliver a more human experience with new human forward. Retrieved Oct 2, 2019 from https://www.randstad.com.sg/about-us/news/randstad-aims-to-deliver-a-more-human-experience-with-new-human-forward-approach/.

  • Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people. Cambridge University Press.

  • Rotter, J. B. (1966). Generalized expectancies for internal versus external control of reinforcement. Psychological Monographs: General and Applied, 80(1), 1–28.

  • Rupp, D. E., Folger, R., & Skarlicki, D. P. (2017). A critical analysis of the conceptualization and measurement of organizational justice: Is it time for reassessment? Academy of Management Annals, 11(2), 919–959.

  • Sanchez, R. J., Truxillo, D. M., & Bauer, T. N. (2000). Development and examination of an expectancy-based measure of test-taking motivation. Journal of Applied Psychology, 85(5), 739–750.

  • Scarpello, V., & Campbell, J. P. (1983). Job satisfaction: Are all the parts there? Personnel Psychology, 36, 577–600.

  • Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019, January). Fairness and abstraction in sociotechnical systems. In Proceedings of the conference on fairness, accountability, and transparency (pp. 59–68).

  • Shaver, J. M. (2005). Testing for mediating variables in management research: Concerns, implications, and alternative strategies. Journal of Management, 31(3), 330–353.

  • Skitka, L. J. (2002). Do the means always justify the ends, or do the ends sometimes justify the means? A value protection model of justice reasoning. Personality and Social Psychology Bulletin, 28(5), 588–597.

  • Starke, C., Baleis, J., Keller, B., & Marcinkowski, F. (2021). Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature. arXiv preprint: 2103.12016.

  • Stefanowicz, B. (2022, March 23). AI recruitment: The future of hiring or HR’s nightmare? Tidio. Retrieved from https://www.tidio.com/blog/ai-recruitment/

  • Taggar, S., & Kuron, L. K. J. (2016). The toll of perceived injustice on job search self-efficacy and behavior. Career Development International, 21(3), 279–298.

  • Thibaut, J. W., & Walker, L. (1975). Procedural justice: A psychological analysis. Erlbaum.

  • Thibaut, J. W., & Walker, L. (1978). A theory of procedure. California Law Review, 66, 541–566.

  • Uggerslev, K. L., Fassina, N. E., & Kraichy, D. (2012). Recruiting through the stages: A meta-analytic test of predictors of applicant attraction at different stages of the recruiting process. Personnel Psychology, 65(3), 597–660.

  • United Nations. (2011). Guiding principles on business and human rights. Retrieved Aug 17, 2022 from https://www.ohchr.org/documents/publications/guidingprinciplesbusinesshr_en.pdf

  • Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113–117.

  • Weber, L. (2012, January 24). Your Résumé vs. Oblivion. The Wall Street Journal. Retrieved April 8, 2022 from https://www.wsj.com/articles/SB10001424052970204624204577178941034941330.

  • Wettstein, F. (2015). Normativity, ethics, and the UN guiding principles on business and human rights: A critical assessment. Journal of Human Rights, 14(2), 162–182.

  • Wilson, C., Ghosh, A., Jiang, S., Mislove, A., Baker, L., Szary, J., Trindel, K., & Polli, F. (2021, March). Building and auditing fair algorithms: A case study in candidate screening. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 666–677).

  • Yeomans, M., Shah, A. K., Mullainathan, S., & Kleinberg, J. (2019). Making sense of recommendations. Journal of Behavioral Decision Making, 32(4), 403–414.


Funding

No financial support was received to carry out this research.

Author information

Corresponding author

Correspondence to Maude Lavanchy.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical Approval

This article does not contain any studies involving animals or humans performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Sample Size Determination

To determine the sample size for our experiments, we used power analysis (the power of a statistical test must be sufficient to detect a statistically significant “true” difference between groups). We based our power analysis on an ANOVA (to provide a more conservative sample size than a t-test) with two to four groups, depending on the study design, and calculated the sample size necessary to detect a medium effect size (f = 0.25; Cohen, 1988) at 80% power (the customary level used in experimental studies) and a 5% significance level. These calculations indicate a minimum sample size of 159 (Study 1), 180 (Studies 2 and 4), and 128 (Study 3). The power analysis thus shows that the sample sizes of our four studies are sufficient to detect a significant (medium) effect. These sample sizes are also in line with other experimental studies published in the Journal of Business Ethics (e.g., Giroux et al., 2022; Li et al., 2021) as well as in other fields such as human resources (e.g., Ciancetta & Roch, 2021).
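
As a check on these figures, the following minimal sketch re-runs the same calculation in Python using the statsmodels package. This is an illustrative reconstruction, not the tooling originally used (which the appendix does not state); the group counts per study are inferred from the designs and the reported minimums, and the solver's continuous solutions land within one or two observations of the figures above, the small gap reflecting how tools such as G*Power round group sizes up to integers.

```python
# Reconstruction (assumed tooling) of the a priori power analysis: total N
# needed for a one-way ANOVA to detect a medium effect (f = 0.25; Cohen, 1988)
# at 80% power and a 5% significance level, for each study's number of groups.
import math
from statsmodels.stats.power import FTestAnovaPower

solver = FTestAnovaPower()
for label, k_groups in [("Study 1", 3), ("Studies 2 and 4", 4), ("Study 3", 2)]:
    n_total = solver.solve_power(effect_size=0.25, alpha=0.05,
                                 power=0.80, k_groups=k_groups)
    print(f"{label}: minimum total N = {math.ceil(n_total)}")
```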

Appendix B: Study Scenarios

Study 1

Imagine you have five years of sales experience in the food industry. You exhibit great communication and interpersonal skills at your current company. However, you wish to have a change in your working environment. You are now looking for new opportunities in the sales sector, ideally at an environmentally friendly organization.

You discovered a job offer from MangoPick Inc. This company manufactures non-alcoholic beverages and has one of the fastest growing brands in the industry. Please read the job ad on the next screen carefully:

[Figure a: MangoPick Inc. job advertisement]

Human Decision Manipulation

Before applying for this job, you decide to do some research on MangoPick Inc. After some enquiry about its recruitment process, you learn that the hiring manager would review all resumes submitted and select the most relevant applicants to be interviewed.

AI Decision Manipulation

Before applying for this job, you decide to do some research on MangoPick Inc. After some enquiry about its recruitment process, you learn that a computer program would scan all resumes submitted and automatically select the most relevant applicants to be interviewed. There will be no intervention from the hiring manager at this stage.

Human Decision Assisted by AI Manipulation

Before applying for this job, you decide to do some research on MangoPick Inc. After some enquiry about its recruitment process, you learn that the hiring manager will use a computer program to scan all resumes submitted and automatically provide a list of the most relevant applicants to be interviewed. The hiring manager will then review this subset and call the most relevant applicants in his view.

Study 2

Imagine you are just back from a scuba diving trip in Australia to explore the Great Barrier Reef. However, you have been shocked by the severe bleaching of the coral reefs. Seeing these corals dying has deeply affected you. Even though you like your current job, this trip made you realize how important it was for you to work for a company that shares the same vision of the world as yours. Hence, you decide to search for a new job in an environmentally friendly organization.

After weeks of searching, you discovered the following job offer from MangoPick Inc. This company manufactures non-alcoholic beverages and has one of the fastest growing brands in the industry. MangoPick Inc. is also well known for investing a significant share of its profits in projects contributing to the preservation of our planet, including a project to save the coral reefs in Australia. On the next screen, you will see the advertisement posted by MangoPick Inc. in a magazine that you read regularly. Please read the job advertisement carefully.

[Figure b: MangoPick Inc. job advertisement]

After reading the advertisement, you immediately submit an application for the job. It has been a week since you submitted your application, and you are eagerly awaiting a response. At that instant, you notice a new email in your inbox. The email is from MangoPick’s human resources department. As you have been looking forward to hearing from them and are excited about it, you open the email immediately.

Positive Outcome Manipulation

Below is the email from MangoPick Inc.:

We have received your application and thank you for your interest in our company. We have rigorously screened your application with great attention. We are delighted to announce that you have been selected to proceed to the next stage of the recruitment process.

We would like to invite you for an interview. Could you send us your availability in the coming weeks so that we can set a date and time convenient for all of us?

You are extremely excited to be shortlisted for the job! However, you have only passed the first stage of the recruitment process. To maximize your chances of getting the job, you need to prepare for the interview.

Negative Outcome Manipulation

Below is the email from MangoPick Inc.:

We have received your application and thank you for your interest in our company. We have rigorously screened your application with great attention. Despite your interesting background, we regret to inform you that the skills and qualifications of other candidates correspond more closely to the requirements of this position.

Nonetheless, we hope that you will apply again to future MangoPick positions as they arise. Thank you for your time and consideration.

This news devastates you. You can’t believe you were not shortlisted, given all your qualifications. To understand what went wrong with your application, you decide to do more research on MangoPick’s recruitment process.

Human Decision Manipulation

After some enquiry about MangoPick’s recruitment process, you learn that the hiring manager reviewed all resumes submitted and himself selected the most relevant applicants to be interviewed.

AI Decision Manipulation

After some enquiry about MangoPick’s recruitment process, you learn that a computer program scanned all resumes submitted and automatically selected the most relevant applicants to be interviewed. There was no intervention from the hiring manager at this stage.

Appendix C: Additional Details for Study 2

Removing AI Knowledge Variable

The regression results of Study 2 without the AI knowledge variable are displayed in Table 4. Removing this variable increases the magnitude of the AI decision coefficient; however, the outcome variable and the interaction remain significant, with similar signs and magnitudes. In short, the inclusion of the control does not contradict our main findings.

Table 4 Robustness check—Study 2

General AI knowledge questions (* indicates correct response):

1. Google achieved what with its deep learning neural networks?

(a) Code-breaking (b) Weather prediction (c*) Encryption (d) Trend prediction

2. Which one of the below is not a machine learning technique?

(a) Bayesian (b) Deep learning (c*) Habituation (d) Reinforcement

3. IBM’s Watson AI is best known for what?

(a) Driverless technology (b*) Cognitive computing (c) IoT network controlling (d) Predictive maintenance

4. What form of processing is ideal for deep learning?

(a*) Parallel processing (b) Serial processing (c) Sequential processing (d) Data processing

5. Which of the following is a program that allows the computer to simulate conversation with a human being? "Eliza" and "Parry" are early examples of programs that can at least temporarily fool a real human being into thinking they are talking to another person.

(a) Speech Application Program Interface (b*) Chatterbot (c) Speech recognition (d) Amiga

Summary of Responses to Open-Ended Question

See Table 5.

Table 5 Please describe your feeling about this recruitment process?

Appendix D: Mediation and Treatment Effect Decomposition

The objective of mediation analysis is to disentangle the average treatment effect on outcome variables into two channels: (i) an indirect effect arising from the effect of the treatment on mediating variables; and (ii) a direct effect that operates through channels other than changes in the measured inputs. Randomized controlled trials allow for identification of the causal effect of the treatment on measured inputs and outputs, but additional assumptions are needed to identify the causal effect of a mediator on outcome variables.

The standard literature on mediation analysis deals with the problem of confounding effects by invoking different assumptions. Baron and Kenny’s (1986) traditional approach assumes that both the treatment and the mediator are exogenous. Imai et al. (2010, 2011) consider an alternative non-parametric approach and invoke a sequential ignorability assumption, which requires that all confounding variables are observed and that there is no unobserved mediator. However, these assumptions are rarely satisfied in practice (Heckman & Pinto, 2015; Shaver, 2005): it is often impossible to collect data on, and measure without error, all factors that could influence the equation of interest, and our study is no exception. When these underlying assumptions are violated, statistical estimates have undesirable properties and can lead to incorrect conclusions.

Several approaches have been suggested in the literature to limit these adverse effects and provide more meaningful estimates. A common statistical technique is to explicitly model the interdependence between the mediator, the outcome, and other variables using Two-Stage Least Squares (2SLS) or Structural Equation Modeling (SEM), which often require the use of an instrumental variable (IV) (Antonakis et al., 2010, 2014; Shaver, 2005). The downside of this method is that, very often, the “perfect” instrument is not available, and using a weak instrument leads to a bias similar to that of OLS (Bound et al., 1995). As a robustness check on the traditional mediation analysis, we therefore follow the methodology proposed by Heckman and Pinto (2015), and applied in Heckman et al. (2013), to decompose treatment effects into direct and indirect effects (i.e., those channeled through a mediator). Even though some econometric exogeneity and linearity assumptions are still necessary, the decomposition strategy they propose minimizes the endogeneity problems plaguing the mediation methods mentioned above.
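
For concreteness, here is a minimal sketch of the 2SLS approach just described, using the linearmodels package on simulated data. Everything in it is a hypothetical illustration (the instrument z, the confounder u, and all coefficients are invented): it shows why a valid instrument recovers the mediator's effect when a confounder biases OLS, and, by the same token, why the approach stands or falls with the quality of the instrument.

```python
# Illustrative 2SLS sketch (not the analysis used in this paper): a mediator m
# is confounded by an unobserved u, so OLS of y on m is biased; instrumenting
# m with a (hypothetical) valid instrument z recovers the true effect (0.6).
import numpy as np
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                    # hypothetical instrument
u = rng.normal(size=n)                    # unobserved confounder
m = 0.7 * z + u + rng.normal(size=n)      # endogenous mediator
y = 0.6 * m + u + rng.normal(size=n)      # outcome, confounded through u

const = np.ones((n, 1))
res = IV2SLS(dependent=y, exog=const, endog=m, instruments=z).fit()
print(res.params)   # coefficient on m should be close to 0.6
```

With a weak instrument (say, a coefficient of 0.05 on z instead of 0.7), the same estimator becomes unstable and drifts back toward the biased OLS estimate, which is exactly the concern raised by Bound et al. (1995).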

Heckman and Pinto (2015) decompose a linear model into measured and unmeasured components as follows:

$$\begin{gathered} Y_{d} = \kappa_{d} + \mathop \sum \limits_{{j \in {\mathcal{J}}}} \alpha_{d}^{j} \theta_{d}^{j} + \beta_{d} X + \tilde{\varepsilon }_{d} \\ = \kappa_{d} + \underbrace {{\alpha_{d}^{p} \theta_{d}^{p} }}_{{\text{proxied input}}} + \underbrace {{\mathop \sum \limits_{{j \in {\mathcal{J}}\backslash {\mathcal{J}}^{p} }} \alpha_{d}^{j} \theta_{d}^{j} }}_{{\text{unmeasured inputs}}} + \beta_{d} X + \tilde{\varepsilon }_{d} \\ = \left( {\kappa_{d} + \mathop \sum \limits_{{j \in {\mathcal{J}}\backslash {\mathcal{J}}^{p} }} \alpha_{d}^{j} E\left( {\theta_{d}^{j} } \right)} \right) + \alpha_{d}^{p} \theta_{d}^{p} + \beta_{d} X + \left[ {\tilde{\varepsilon }_{d} + \mathop \sum \limits_{{j \in {\mathcal{J}}\backslash {\mathcal{J}}^{p} }} \alpha_{d}^{j} \left( {\theta_{d}^{j} - E\left( {\theta_{d}^{j} } \right)} \right)} \right] \\ = \tau_{d} + \alpha_{d}^{p} \theta_{d}^{p} + \beta_{d} X + \varepsilon_{d} \\ \end{gathered}$$
(1)

where \(Y_{d}\) is the outcome variable, \(\tau_{d} = \kappa_{d} + \sum_{{j \in {\mathcal{J}}\backslash {\mathcal{J}}^{p} }} \alpha_{d}^{j} E\left( {\theta_{d}^{j} } \right)\) is a constant, \(\alpha_{d}\) is a \(|{\mathcal{J}}|\)-dimensional vector of input coefficients, \(\theta_{d}^{p}\) is our proxied input, i.e., our mediator (whether the recruitment process is able to identify unique characteristics), \(\beta_{d}\) is a \(|X|\)-dimensional vector of coefficients, \(X\) are pre-treatment control variables, \(\tilde{\varepsilon }_{d}\) is a zero-mean error term assumed to be independent of the regressors \(\theta_{d}\) and \(X\), \(d \in \left\{ {0,1} \right\}\) is the treatment indicator, and \(\varepsilon_{d} = \tilde{\varepsilon }_{d} + \sum_{{j \in {\mathcal{J}}\backslash {\mathcal{J}}^{p} }} \alpha_{d}^{j} \left( {\theta_{d}^{j} - E\left( {\theta_{d}^{j} } \right)} \right)\) is also a zero-mean error term. Hence, the error term \(\varepsilon_{d}\) will be correlated with the proxied/measured input if the measured inputs are correlated with the unmeasured inputs.

Then, to decompose the treatment effects into components attributable to the change in our proxied inputs (\(\Delta \theta = \theta_{1} - \theta_{0}\)) and the change in parameters (\(\Delta \alpha = \alpha_{1} - \alpha_{0}\)), it is necessary to assume that changes in unmeasured inputs attributable to the experiment are independent of X:

$$\begin{gathered} E\left( {Y_{1} - Y_{0} |X} \right) = \left( {\tau_{1} - \tau_{0} } \right) + E\left( {\alpha_{1} \theta_{1}^{p} - \alpha_{0} \theta_{0}^{p} } \right) + \left( {\beta_{1} - \beta_{0} } \right)X \\ = \underbrace {{\left( {\tau_{1} - \tau_{0} } \right)}}_{{\text{direct effect}}} + \underbrace {{\left( {\Delta \alpha + \alpha_{0} } \right)E\left( {\Delta \theta^{p} } \right) + \left( {\Delta \alpha } \right)E\left( {\theta_{0}^{p} } \right)}}_{{\text{indirect effect}}} + \underbrace {{\left( {\beta_{1} - \beta_{0} } \right)X}}_{{{\text{other}}}} \\ \end{gathered}$$
(2)

where \(Y_{1}\) and \(Y_{0}\) represent the outcome variable under the treatment and control conditions, \((\tau_{1} - \tau_{0} )\) is the average difference between the treatment and control groups that is not attributable to measured inputs, \(\theta_{d}\) is our mediator (i.e., whether the recruitment process is able to identify unique characteristics), and X are pre-treatment control variables. This equation can be simplified if the structural invariance (or autonomy) assumption is satisfied, that is, if \(\beta_{1} = \beta_{0}\) and \(\alpha_{1} = \alpha_{0}\). As explained in Heckman and Pinto (2015), if measured and unmeasured inputs are independent, these parameters can be consistently estimated by OLS and tested. Wald tests revealed that only the model coefficients associated with the pre-treatment variables X are the same for the treatment and control groups (i.e., \(\beta_{1} = \beta_{0}\), χ2(16) = 14.98, p = 0.526, but \(\alpha_{1} \ne \alpha_{0}\), χ2(1) = 3.79, p = 0.052). Equation (2) then simplifies to:

$$E\left( {Y_{1} - Y_{0} } \right) = \left( {\tau_{1} - \tau_{0} } \right) + \left( {\Delta \alpha + \alpha_{0} } \right)E\left( {\theta_{1}^{p} - \theta_{0}^{p} } \right) + \left( {\Delta \alpha } \right)E\left( {\theta_{0}^{p} } \right)$$
(3)

The outcome equation to be estimated using standard linear regression, pooling both treatment groups, then becomes:

$$Y = \tau_{0} + \phi D + \alpha \theta^{p} + \omega \theta^{p} \cdot D + \beta X + \eta$$
(4)

Hence, the treatment effect channeled through the mediator originates from (i) the impact of the mediator on the outcome and (ii) the enhancement of the mediator by the intervention. Given the assumptions that \(\theta^{p}\) is measured without error and is independent of the error term \(\varepsilon\), Heckman and Pinto (2015) showed that the least squares estimators of the parameters of Eq. (4) are unbiased. We estimate all parameters in this decomposition through the following series of regression steps (a code sketch of the procedure follows the list):

  1. We regress our mediator on the treatment indicator and the vector of pre-treatment variables:

     $$\theta_{i}^{p} = \delta_{0} + \delta_{1} D_{i} + \delta_{2} X_{i} + \upsilon_{i} \quad i = 1 \ldots N$$
     (5)

     This estimation step yields the mediator mean \(E\left( {\theta_{0}^{p} } \right) = \hat{\delta }_{0} + \hat{\delta }_{2} E\left( X \right)\) and the expected change in our mediator from the treatment conditional on X: \(E\left( {\theta_{1}^{p} - \theta_{0}^{p} } \right) = \hat{\delta }_{1}\).

  2. We estimate Eq. (4) to obtain:

     a. the direct effect: \(\left( {\tau_{1} - \tau_{0} } \right) = \hat{\phi }\);

     b. the parameters \({\Delta }\alpha = \hat{\omega }\) and \(\alpha_{0} = \hat{\alpha }\).

  3. We combine the regression results from the two previous steps to calculate the decomposition of the treatment effect:

     a. the direct effect: \(\left( {\tau_{1} - \tau_{0} } \right) = \hat{\phi }\);

     b. the indirect effect: \(\left( {\Delta \alpha + \alpha_{0} } \right)E\left( {\theta_{1}^{p} - \theta_{0}^{p} } \right) + {\Delta }\alpha E\left( {\theta_{0}^{p} } \right) = \left( {\hat{\omega } + \hat{\alpha }} \right)\hat{\delta }_{1} + \hat{\omega }\left[ {\hat{\delta }_{0} + \hat{\delta }_{2} E\left( X \right)} \right]\).
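
To make these steps concrete, below is a minimal sketch of the procedure in Python (statsmodels) on simulated data. The data-generating process and the variable names (D, theta, X, Y) are illustrative placeholders, not the study's data; the point is only to show how Eqs. (4) and (5) combine into the direct/indirect decomposition.

```python
# Sketch of the Heckman-Pinto (2015) two-step decomposition under the stated
# assumptions, on simulated placeholder data (one pre-treatment control X).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
D = rng.integers(0, 2, n).astype(float)          # treatment indicator
X = rng.normal(size=n)                           # pre-treatment control
theta = 0.8 * D + 0.3 * X + rng.normal(size=n)   # mediator (proxied input)
Y = 0.5 * D + 0.6 * theta + 0.2 * theta * D + 0.1 * X + rng.normal(size=n)

# Step 1 -- mediator equation, Eq. (5): theta = d0 + d1*D + d2*X + v
m1 = sm.OLS(theta, sm.add_constant(np.column_stack([D, X]))).fit()
d0, d1, d2 = m1.params

# Step 2 -- outcome equation, Eq. (4): Y = t0 + phi*D + a*theta + w*theta*D + b*X
m2 = sm.OLS(Y, sm.add_constant(np.column_stack([D, theta, theta * D, X]))).fit()
_, phi, alpha0, omega, _ = m2.params
# (Structural invariance, e.g. alpha_1 = alpha_0, can be probed with a Wald
#  test on the interaction coefficient via m2.wald_test.)

# Step 3 -- combine into the decomposition of Eq. (3)
direct = phi
indirect = (omega + alpha0) * d1 + omega * (d0 + d2 * X.mean())
print(f"direct effect = {direct:.3f}, indirect effect = {indirect:.3f}")
```

In the paper itself, inference on these quantities relies on one-sided bootstrap p-values, as noted below.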

Estimation results are shown in Table 6.

Table 6 Study 3: Regression table based on Heckman and Pinto’s (2015) methodology

As in Heckman et al. (2013), only coefficients whose one-sided bootstrap p-values are below 0.1 are used to decompose the treatment effect. Unbiasedness of this decomposition relies on the key assumption that measured and unmeasured inputs are independent. Note, however, that while Heckman and Pinto (2015) used factor analysis to aggregate measures and account for measurement error, we do not have enough indicators of our construct to do so. Hence, our estimates may suffer from attenuation bias induced by uncorrected measurement error.

Appendix E: Sample Size Representativeness

See Table 7.

Table 7 Study samples key demographic summary statistics (average) vs. U.S. population

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Lavanchy, M., Reichert, P., Narayanan, J. et al. Applicants’ Fairness Perceptions of Algorithm-Driven Hiring Procedures. J Bus Ethics 188, 125–150 (2023). https://doi.org/10.1007/s10551-022-05320-w
