ABSTRACT
Discriminatory practices in recruitment and hiring are an ongoing issue, a concern not just for workplace relations but also for wider understandings of economic justice and inequality. The ability to get and keep a job is a key aspect of participating in society and sustaining livelihoods. Yet the way decisions are made about who is eligible for jobs, and why, is rapidly changing with the advent and growing uptake of automated hiring systems (AHSs) powered by data-driven tools. Evidence of the extent of this uptake around the globe is scarce, but a recent report estimated that 98% of Fortune 500 companies use Applicant Tracking Systems of some kind in their hiring process, a trend driven by perceived efficiency gains and cost savings. Key concerns about such AHSs include their lack of transparency and their potential to limit access to jobs for specific profiles. In relation to the latter, however, several of these AHSs claim to detect and mitigate discriminatory practices against protected groups and to promote diversity and inclusion at work. Yet whilst these tools have a growing user base around the world, such claims of 'bias mitigation' are rarely scrutinised and evaluated, and when they are, it has almost exclusively been from a US socio-legal perspective.
In this paper, we introduce a perspective from outside the US by critically examining how three prominent AHSs in regular use in the UK, HireVue, Pymetrics and Applied, understand and attempt to mitigate bias and discrimination. These systems have been chosen because they explicitly claim to address issues of discrimination in hiring and, unlike many of their competitors, provide some information about how their systems work that can inform an analysis. Using publicly available documents, we describe how their tools are designed, validated and audited for bias, highlighting assumptions and limitations, before situating these in the socio-legal context of the UK. The UK has a very different legal background to the US, not only in terms of hiring and equality law but also in terms of data protection (DP) law. We argue that this might be important for addressing concerns about transparency, and could make it difficult to build bias mitigation into AHSs in a way that definitively meets EU legal standards. This is significant because these AHSs, especially those developed in the US, may obscure rather than address systemic discrimination in the workplace.
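The vendors' 'bias mitigation' and audit claims discussed above are commonly framed around the US EEOC's 'four-fifths rule' for adverse impact (see the Uniform Guidelines Q&A in the references). The following is a minimal illustrative sketch of such an audit, not any vendor's actual implementation; the group labels and decision data are hypothetical:

```python
# Illustrative "four-fifths rule" adverse impact audit: compare each
# group's selection rate against the highest-selected group's rate.
# A ratio below 0.8 is treated as evidence of adverse impact under
# the EEOC heuristic. Hypothetical data; not a vendor implementation.

def selection_rates(outcomes):
    """outcomes: dict mapping group label -> list of 0/1 hiring decisions."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1, 0, 1],  # 7 of 10 selected
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 3 of 10 selected
}
ratios = adverse_impact_ratios(decisions)
flagged = {g for g, r in ratios.items() if r < 0.8}
# group_b's ratio is 0.3 / 0.7 ~= 0.43, below the 0.8 threshold
```

Note that passing such a ratio check says nothing about why the selection rates differ, which is part of the gap between statistical audits and legal standards that the paper examines.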
REFERENCES
- Equality and Human Rights Commission. 2011. Equality Act 2010 Statutory Code of Practice: Employment. https://www.equalityhumanrights.com/en/publication-download/employment-statutory-code-practice
- Article 29 Working Party. 2016. Guidelines on consent under Regulation 2016/679 (WP259 rev.01).
- Charter of Fundamental Rights of the European Union. 2016. [2016] OJ C202/1, 389--405. https://eur-lex.europa.eu/legal-content/EN/TXT/?toc=OJ%3AC%3A2016%3A202%3ATOC&uri=uriserv%3AOJ.C_.2016.202.01.0389.01.ENG
- Ifeoma Ajunwa. 2018. The Rise of Platform Authoritarianism. https://www.aclu.org/issues/privacy-technology/surveillance-technologies/rise-platform-authoritarianism
- Ifeoma Ajunwa. 2020. The Paradox of Automation as Anti-Bias Intervention. 41 Cardozo L. Rev. (forthcoming 2020). https://ssrn.com/abstract=2746078
- Ifeoma Ajunwa, Kate Crawford, and Jason Schultz. 2017. Limitless Worker Surveillance. 105 Calif. L. Rev. 735 (2017).
- Ifeoma Ajunwa. 2019. Platforms at Work: Automated Hiring Platforms and Other New Intermediaries in the Organization of Work. In Work and Labor in the Digital Age, Daniel Greene, Steve P. Vallas, and Anne Kovalainen (Eds.). Research in the Sociology of Work, Vol. 33. Emerald Publishing Limited, 61--91.
- AlgorithmWatch. 2019. Automating Society: Taking Stock of Automated Decision-Making in the EU. Technical Report. AlgorithmWatch in cooperation with Bertelsmann Stiftung. https://www.algorithmwatch.org/automating-society
- Julia Angwin and Jeff Larson. 2016. Machine Bias. ProPublica (May 2016). https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- Julia Angwin and Noam Scheiber. 2017. Dozens of Companies Are Using Facebook to Exclude Older Workers From Job Ads. ProPublica (Dec. 2017). https://www.propublica.org/article/facebook-ads-age-discrimination-targeting
- Applied. 2019. Scaling fast: how to get hiring right. Technical Report. https://www.beapplied.com/whitepaper-signup
- Ariana Tobin and Jeremy B. Merrill. 2018. Facebook Is Letting Job Advertisers Target Only Men. ProPublica (Sept. 2018). https://www.propublica.org/article/facebook-is-letting-job-advertisers-target-only-men
- Ananth Balashankar, Alyssa Lees, Chris Welty, and Lakshminarayanan Subramanian. 2019. Pareto-Efficient Fairness for Skewed Subgroup Data. In International Conference on Machine Learning AI for Social Good Workshop. Long Beach, United States.
- Solon Barocas and Andrew D. Selbst. 2016. Big Data's Disparate Impact. 104 Calif. L. Rev. 671 (2016), 671--732.
- Miranda Bogen and Aaron Rieke. 2018. Help Wanted: An Exploration of Hiring Algorithms, Equity and Bias. Technical Report. Upturn. https://www.upturn.org/static/reports/2018/hiring-algorithms/files/Upturn%20-%20Help%20Wanted%20-%20An%20Exploration%20of%20Hiring%20Algorithms,%20Equity%20and%20Bias.pdf
- Joy Buolamwini and Timnit Gebru. 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (Proceedings of Machine Learning Research), Sorelle A. Friedler and Christo Wilson (Eds.), Vol. 81. PMLR, New York, NY, USA, 77--91. http://proceedings.mlr.press/v81/buolamwini18a.html
- Peter Cappelli. 2019. Data Science Can't Fix Hiring (Yet). Harvard Business Review (May 2019). https://hbr.org/2019/05/recruiting
- Le Chen, Ruijun Ma, Anikó Hannák, and Christo Wilson. 2018. Investigating the Impact of Gender on Rank in Resume Search Engines. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM Press, Montreal, QC, Canada, 1--14.
- Sam Corbett-Davies and Sharad Goel. 2018. The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning. (July 2018). https://arxiv.org/abs/1808.00023
- Lina Dencik, Fieke Jansen, and Philippa Metcalfe. 2018. A conceptual framework for approaching social justice in an age of datafication. https://datajusticeproject.net/2018/08/30/a-conceptual-framework-for-approaching-social-justice-in-an-age-of-datafication/
- Lilian Edwards and Michael Veale. 2017. Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For. 16 Duke Law & Technology Review (May 2017), 18--84. https://scholarship.law.duke.edu/dltr/vol16/iss1/2
- Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian, Sonam Choudhary, Evan P. Hamilton, and Derek Roth. 2019. A Comparative Study of Fairness-enhancing Interventions in Machine Learning. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). ACM, New York, NY, USA, 329--338.
- Seeta Peña Gangadharan and Jędrzej Niklas. 2019. Decentering technology in discourse on discrimination. Information, Communication & Society 22, 7 (June 2019), 882--899.
- Danielle Gaucher, Justin Friesen, and Aaron C. Kay. 2011. Evidence that gendered wording in job advertisements exists and sustains gender inequality. Journal of Personality and Social Psychology 101, 1 (July 2011), 109--128.
- Bryce Goodman and Seth Flaxman. 2017. European Union regulations on algorithmic decision-making and a "right to explanation". AI Magazine 38, 3 (Oct. 2017), 50.
- Deborah Hellman. 2019. Measuring Algorithmic Fairness. Technical Report. https://papers.ssrn.com/abstract=3418528
- HireVue. 2019. Bias, AI Ethics, and the HireVue Approach. https://www.hirevue.com/why-hirevue/ethical-ai
- Anna Lauren Hoffmann. 2019. Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society 22, 7 (June 2019), 900--915.
- Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. 2018. Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. In Proceedings of the 35th International Conference on Machine Learning (Proceedings of Machine Learning Research), Jennifer Dy and Andreas Krause (Eds.), Vol. 80. PMLR, Stockholm, Sweden, 2564--2572. http://proceedings.mlr.press/v80/kearns18a.html
- Jackie A. Lane and Rachel Ingleby. 2018. Indirect Discrimination, Justification and Proportionality: Are UK Claimants at a Disadvantage? Industrial Law Journal 47, 4 (Dec. 2018), 531--552.
- Loren Larsen and Benjamin Taylor. 2017. Performance model adverse impact correction. Patent No. US20170293858A1. Filed Sep. 27, 2016; issued Oct. 12, 2017. https://patents.google.com/patent/US20170293858A1/en
- Colin Lecher. 2019. How Amazon automatically tracks and fires warehouse workers for 'productivity'. The Verge (April 2019). https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-productivity-firing-terminations
- Zachary Lipton, Julian McAuley, and Alexandra Chouldechova. 2018. Does mitigating ML's impact disparity require treatment disparity? In Advances in Neural Information Processing Systems 31, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.). Curran Associates, Inc., 8125--8135. http://papers.nips.cc/paper/8035-does-mitigating-mls-impact-disparity-require-treatment-disparity.pdf
- Phoebe Moore and Andrew Robinson. 2016. The quantified self: What counts in the neoliberal workplace. New Media & Society 18, 11 (Dec. 2016), 2774--2792.
- Safiya Umoja Noble. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
- Cathy O'Neil. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group, New York, NY, USA.
- Rebekah Overdorf, Bogdan Kulynych, Ero Balsa, Carmela Troncoso, and Seda Gürses. 2018. Questioning the assumptions behind fairness solutions. In Critiquing and Correcting Trends in Machine Learning. Montréal, Canada. arXiv:1811.11293. http://arxiv.org/abs/1811.11293
- Seeta Peña Gangadharan, Virginia Eubanks, and Solon Barocas (Eds.). 2014. Data and Discrimination: Collected Essays. Open Technology Institute, New America. https://newamerica.org/documents/945/data-and-discrimination.pdf
- Frida Polli and Julie Yoo. 2019. Systems and methods for data-driven identification of talent. Patent No. US20190026681A1. Filed Jun. 20, 2018; issued Jan. 24, 2019. https://patents.google.com/patent/US20190026681A1/en
- Manish Raghavan, Solon Barocas, Jon Kleinberg, and Karen Levy. 2020. Mitigating Bias in Algorithmic Employment Screening: Evaluating Claims and Practices. In Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM. arXiv:1906.09208. http://arxiv.org/abs/1906.09208
- Sam Adler-Bell and Michelle Miller. 2018. The Datafication of Employment. Technical Report. The Century Foundation. https://tcf.org/content/report/datafication-employment-surveillance-capitalism-shaping-workers-futures-without-knowledge/
- Malcolm Sargeant. 2017. Discrimination and the Law (2nd ed.). Routledge, London.
- Andrew D. Selbst, Danah Boyd, Sorelle Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. Fairness and Abstraction in Sociotechnical Systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). ACM, Atlanta, GA, USA, 59--68.
- Andrew D. Selbst and Julia Powles. 2017. Meaningful information and the right to explanation. International Data Privacy Law 7, 4 (Dec. 2017), 233--242.
- Jon Shields. 2018. Over 98% of Fortune 500 Companies Use Applicant Tracking Systems (ATS). https://www.jobscan.co/blog/fortune-500-use-applicant-tracking-systems/
- Haroon Siddique. 2019. Minority ethnic Britons face 'shocking' job discrimination. The Guardian (Jan. 2019). https://www.theguardian.com/world/2019/jan/17/minority-ethnic-britons-face-shocking-job-discrimination
- Javier Sánchez-Monedero and Lina Dencik. 2018. How to (partially) evaluate automated decision systems. Technical Report. Cardiff University. https://datajusticeproject.net/wp-content/uploads/sites/30/2018/12/WP-How-to-evaluate-automated-decision-systems.pdf
- Javier Sánchez-Monedero and Lina Dencik. 2019. The datafication of the workplace. Technical Report. Cardiff University. https://datajusticeproject.net/wp-content/uploads/sites/30/2019/05/Report-The-datafication-of-the-workplace.pdf
- Benjamin Taylor and Loren Larsen. 2017. Model-driven evaluator bias detection. Patent No. US9652745B2. Filed Nov. 17, 2014; issued May 16, 2017. https://patents.google.com/patent/US9652745B2/en
- Kyla Thomas. 2018. The Labor Market Value of Taste: An Experimental Study of Class Bias in U.S. Employment. Sociological Science 5 (Sept. 2018), 562--595.
- Uber Technologies Inc. 2019. Uber Privacy. https://privacy.uber.com/policy/
- US EEOC. 1979. Adoption of Questions and Answers To Clarify and Provide a Common Interpretation of the Uniform Guidelines on Employee Selection Procedures. Vol. 44, No. 43. The U.S. Equal Employment Opportunity Commission. https://www.eeoc.gov/policy/docs/qanda_clarify_procedures.html
- Michael Veale and Lilian Edwards. 2018. Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on automated decision-making and profiling. Computer Law & Security Review 34, 2 (April 2018), 398--404.
- Sahil Verma and Julia Rubin. 2018. Fairness Definitions Explained. In Proceedings of the International Workshop on Software Fairness (FairWare '18). ACM, Gothenburg, Sweden, 1--7.
- Sandra Wachter, Brent Mittelstadt, and Luciano Floridi. 2017. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law 7, 2 (June 2017), 76--99.
- Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2018. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law & Technology 31, 2 (2018).
- Judy Wajcman. 2017. Automation: is it really different this time? The British Journal of Sociology 68, 1 (2017), 119--127.
- Julie Yoo. 2017. Pymetrics with Dr. Julie Yoo. https://www.youtube.com/watch?v=9fF1FDLyEmM
- Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez-Rodriguez, and Krishna P. Gummadi. 2019. Fairness Constraints: A Flexible Approach for Fair Classification. Journal of Machine Learning Research 20, 75 (2019), 1--42. http://jmlr.org/papers/v20/18-262.html
- Nora Zelevansky. 2019. The Big Business of Unconscious Bias. The New York Times (Nov. 2019). https://www.nytimes.com/2019/11/20/style/diversity-consultants.html
Index Terms
- What does it mean to 'solve' the problem of discrimination in hiring?: social, technical and legal perspectives from the UK on automated hiring systems