DOI: 10.1145/3351095.3372849 · ACM FAccT Conference Proceedings · Research Article · Open Access

What does it mean to 'solve' the problem of discrimination in hiring?: social, technical and legal perspectives from the UK on automated hiring systems

Published: 27 January 2020

ABSTRACT

Discriminatory practices in recruitment and hiring are an ongoing issue that is a concern not just for workplace relations, but also for wider understandings of economic justice and inequality. The ability to get and keep a job is a key aspect of participating in society and sustaining livelihoods. Yet the ways decisions are made about who is eligible for jobs, and why, are rapidly changing with the advent and growing uptake of automated hiring systems (AHSs) powered by data-driven tools. Evidence of the extent of this uptake around the globe is scarce, but a recent report estimated that 98% of Fortune 500 companies use Applicant Tracking Systems of some kind in their hiring process, a trend driven by perceived efficiency gains and cost savings. Key concerns about such AHSs include their lack of transparency and their potential to limit access to jobs for specific profiles. In relation to the latter, however, several of these AHSs claim to detect and mitigate discriminatory practices against protected groups and to promote diversity and inclusion at work. Yet whilst these tools have a growing user base around the world, such claims of 'bias mitigation' are rarely scrutinised and evaluated, and when they are, the analysis has almost exclusively been from a US socio-legal perspective.

In this paper, we introduce a perspective outside the US by critically examining how three prominent automated hiring systems (AHSs) in regular use in the UK, HireVue, Pymetrics and Applied, understand and attempt to mitigate bias and discrimination. These systems have been chosen because they explicitly claim to address issues of discrimination in hiring and, unlike many of their competitors, provide some information about how their systems work that can inform an analysis. Using publicly available documents, we describe how their tools are designed, validated and audited for bias, highlighting assumptions and limitations, before situating these in the socio-legal context of the UK. The UK has a very different legal background from the US, not only in terms of hiring and equality law, but also in terms of data protection (DP) law. We argue that this difference matters for addressing concerns about transparency, and that it poses a challenge to building bias mitigation into AHSs in a way that definitively meets EU legal standards. This is significant because these AHSs, especially those developed in the US, may obscure rather than improve systemic discrimination in the workplace.
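To make the kind of "bias audit" at stake concrete: one widely used adverse-impact heuristic is the "four-fifths rule" from the US EEOC Uniform Guidelines (cited in the references below), which some AHS vendors invoke when validating their selection procedures. The following is a minimal illustrative sketch, not taken from the paper or any vendor's system; the group names and counts are hypothetical.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def four_fifths_check(rates: dict) -> dict:
    """Apply the EEOC four-fifths heuristic: flag adverse impact when a
    group's selection rate falls below 80% of the highest group's rate.
    Returns True for groups that pass, False for groups that fail."""
    top = max(rates.values())
    return {group: rate / top >= 0.8 for group, rate in rates.items()}

# Hypothetical applicant pools and hires for two groups:
rates = {
    "group_a": selection_rate(48, 100),  # 0.48
    "group_b": selection_rate(30, 100),  # 0.30
}
# group_b's ratio is 0.30 / 0.48 = 0.625 < 0.8, so it fails the check.
print(four_fifths_check(rates))
```

Passing such a threshold test is, of course, far weaker than the paper's question of whether a system meets UK or EU legal standards; it is a single numerical heuristic from a US regulatory context.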

References

  1. Equality and Human Rights Commission. 2011. Equality Act 2010 Statutory Code of Practice: Employment. https://www.equalityhumanrights.com/en/publication-download/employment-statutory-code-practice
  2. Article 29 Working Party. 2016. Guidelines on consent under Regulation 2016/679, WP259 rev.01.
  3. 2016. Charter of Fundamental Rights of the European Union. [2016] OJ C202/1, 389--405. https://eur-lex.europa.eu/legal-content/EN/TXT/?toc=OJ%3AC%3A2016%3A202%3ATOC&uri=uriserv%3AOJ.C_.2016.202.01.0389.01.ENG
  4. Ifeoma Ajunwa. 2018. The Rise of Platform Authoritarianism. https://www.aclu.org/issues/privacy-technology/surveillance-technologies/rise-platform-authoritarianism
  5. Ifeoma Ajunwa. 2020. The Paradox of Automation as Anti-Bias Intervention. Cardozo Law Review 41 (forthcoming). https://ssrn.com/abstract=2746078
  6. Ifeoma Ajunwa, Kate Crawford, and Jason Schultz. 2017. Limitless Worker Surveillance. California Law Review 105 (2017), 735.
  7. Ifeoma Ajunwa. 2019. Platforms at Work: Automated Hiring Platforms and Other New Intermediaries in the Organization of Work. In Work and Labor in the Digital Age, Daniel Greene, Steve P. Vallas, and Anne Kovalainen (Eds.). Research in the Sociology of Work, Vol. 33. Emerald Publishing Limited, 61--91.
  8. AlgorithmWatch. 2019. Automating Society: Taking Stock of Automated Decision-Making in the EU. Technical Report. AlgorithmWatch in cooperation with Bertelsmann Stiftung. https://www.algorithmwatch.org/automating-society
  9. Julia Angwin and Jeff Larson. 2016. Machine Bias. ProPublica (May 2016). https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  10. Julia Angwin and Noam Scheiber. 2017. Dozens of Companies Are Using Facebook to Exclude Older Workers From Job Ads. ProPublica (Dec. 2017). https://www.propublica.org/article/facebook-ads-age-discrimination-targeting
  11. Applied. 2019. Scaling fast: how to get hiring right. Technical Report. https://www.beapplied.com/whitepaper-signup
  12. Ariana Tobin and Jeremy B. Merrill. 2018. Facebook Is Letting Job Advertisers Target Only Men. ProPublica (Sept. 2018). https://www.propublica.org/article/facebook-is-letting-job-advertisers-target-only-men
  13. Ananth Balashankar, Alyssa Lees, Chris Welty, and Lakshminarayanan Subramanian. 2019. Pareto-Efficient Fairness for Skewed Subgroup Data. In International Conference on Machine Learning AI for Social Good Workshop. Long Beach, CA, USA.
  14. Solon Barocas and Andrew D. Selbst. 2016. Big Data's Disparate Impact. California Law Review 104 (2016), 671--732.
  15. Miranda Bogen and Aaron Rieke. 2018. Help Wanted: An Exploration of Hiring Algorithms, Equity and Bias. Technical Report. Upturn. https://www.upturn.org/static/reports/2018/hiring-algorithms/files/Upturn%20-%20Help%20Wanted%20-%20An%20Exploration%20of%20Hiring%20Algorithms,%20Equity%20and%20Bias.pdf
  16. Joy Buolamwini and Timnit Gebru. 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (Proceedings of Machine Learning Research, Vol. 81), Sorelle A. Friedler and Christo Wilson (Eds.). PMLR, New York, NY, USA, 77--91. http://proceedings.mlr.press/v81/buolamwini18a.html
  17. Peter Cappelli. 2019. Data Science Can't Fix Hiring (Yet). Harvard Business Review (May 2019). https://hbr.org/2019/05/recruiting
  18. Le Chen, Ruijun Ma, Anikó Hannák, and Christo Wilson. 2018. Investigating the Impact of Gender on Rank in Resume Search Engines. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, Montreal, QC, Canada, 1--14.
  19. Sam Corbett-Davies and Sharad Goel. 2018. The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning. (July 2018). https://arxiv.org/abs/1808.00023
  20. Lina Dencik, Fieke Jansen, and Philippa Metcalfe. 2018. A conceptual framework for approaching social justice in an age of datafication. https://datajusticeproject.net/2018/08/30/a-conceptual-framework-for-approaching-social-justice-in-an-age-of-datafication/
  21. Lilian Edwards and Michael Veale. 2017. Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For. Duke Law & Technology Review 16 (May 2017), 18--84. https://scholarship.law.duke.edu/dltr/vol16/iss1/2
  22. Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian, Sonam Choudhary, Evan P. Hamilton, and Derek Roth. 2019. A Comparative Study of Fairness-enhancing Interventions in Machine Learning. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). ACM, Atlanta, GA, USA, 329--338.
  23. Seeta Peña Gangadharan and Jędrzej Niklas. 2019. Decentering technology in discourse on discrimination. Information, Communication & Society 22, 7 (June 2019), 882--899.
  24. Danielle Gaucher, Justin Friesen, and Aaron C. Kay. 2011. Evidence that gendered wording in job advertisements exists and sustains gender inequality. Journal of Personality and Social Psychology 101, 1 (July 2011), 109--128.
  25. Bryce Goodman and Seth Flaxman. 2017. European Union regulations on algorithmic decision-making and a "right to explanation". AI Magazine 38, 3 (Oct. 2017), 50.
  26. Deborah Hellman. 2019. Measuring Algorithmic Fairness. Technical Report. https://papers.ssrn.com/abstract=3418528
  27. HireVue. 2019. Bias, AI Ethics, and the HireVue Approach. https://www.hirevue.com/why-hirevue/ethical-ai
  28. Anna Lauren Hoffmann. 2019. Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society 22, 7 (June 2019), 900--915.
  29. Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. 2018. Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. In Proceedings of the 35th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 80), Jennifer Dy and Andreas Krause (Eds.). PMLR, Stockholm, Sweden, 2564--2572. http://proceedings.mlr.press/v80/kearns18a.html
  30. Jackie A. Lane and Rachel Ingleby. 2018. Indirect Discrimination, Justification and Proportionality: Are UK Claimants at a Disadvantage? Industrial Law Journal 47, 4 (Dec. 2018), 531--552.
  31. Loren Larsen and Benjamin Taylor. 2017. Performance model adverse impact correction. Patent No. US20170293858A1, Filed Sep. 27, 2016, Issued Oct. 12, 2017. https://patents.google.com/patent/US20170293858A1/en
  32. Colin Lecher. 2019. How Amazon automatically tracks and fires warehouse workers for 'productivity'. The Verge (April 2019). https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-productivity-firing-terminations
  33. Zachary Lipton, Julian McAuley, and Alexandra Chouldechova. 2018. Does mitigating ML's impact disparity require treatment disparity? In Advances in Neural Information Processing Systems 31, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.). Curran Associates, Inc., 8125--8135. http://papers.nips.cc/paper/8035-does-mitigating-mls-impact-disparity-require-treatment-disparity.pdf
  34. Phoebe Moore and Andrew Robinson. 2016. The quantified self: What counts in the neoliberal workplace. New Media & Society 18, 11 (Dec. 2016), 2774--2792.
  35. Safiya Umoja Noble. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
  36. Cathy O'Neil. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group, New York, NY, USA.
  37. Rebekah Overdorf, Bogdan Kulynych, Ero Balsa, Carmela Troncoso, and Seda Gürses. 2018. Questioning the assumptions behind fairness solutions. In Critiquing and Correcting Trends in Machine Learning. Montréal, Canada. http://arxiv.org/abs/1811.11293
  38. Seeta Peña Gangadharan, Virginia Eubanks, and Solon Barocas (Eds.). 2014. Data and Discrimination: Collected Essays. Open Technology Institute, New America. https://newamerica.org/documents/945/data-and-discrimination.pdf
  39. Frida Polli and Julie Yoo. 2019. Systems and methods for data-driven identification of talent. Patent No. US20190026681A1, Filed Jun. 20, 2018, Issued Jan. 24, 2019. https://patents.google.com/patent/US20190026681A1/en
  40. Manish Raghavan, Solon Barocas, Jon Kleinberg, and Karen Levy. 2020. Mitigating Bias in Algorithmic Employment Screening: Evaluating Claims and Practices. In Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM. http://arxiv.org/abs/1906.09208
  41. Sam Adler-Bell and Michelle Miller. 2018. The Datafication of Employment. Technical Report. The Century Foundation. https://tcf.org/content/report/datafication-employment-surveillance-capitalism-shaping-workers-futures-without-knowledge/
  42. Malcolm Sargeant. 2017. Discrimination and the Law (2nd ed.). Routledge, London.
  43. Andrew D. Selbst, Danah Boyd, Sorelle Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. Fairness and Abstraction in Sociotechnical Systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). ACM, Atlanta, GA, USA, 59--68.
  44. Andrew D. Selbst and Julia Powles. 2017. Meaningful information and the right to explanation. International Data Privacy Law 7, 4 (Dec. 2017), 233--242.
  45. Jon Shields. 2018. Over 98% of Fortune 500 Companies Use Applicant Tracking Systems (ATS). https://www.jobscan.co/blog/fortune-500-use-applicant-tracking-systems/
  46. Haroon Siddique. 2019. Minority ethnic Britons face 'shocking' job discrimination. The Guardian (Jan. 2019). https://www.theguardian.com/world/2019/jan/17/minority-ethnic-britons-face-shocking-job-discrimination
  47. Javier Sánchez-Monedero and Lina Dencik. 2018. How to (partially) evaluate automated decision systems. Technical Report. Cardiff University. https://datajusticeproject.net/wp-content/uploads/sites/30/2018/12/WP-How-to-evaluate-automated-decision-systems.pdf
  48. Javier Sánchez-Monedero and Lina Dencik. 2019. The datafication of the workplace. Technical Report. Cardiff University. https://datajusticeproject.net/wp-content/uploads/sites/30/2019/05/Report-The-datafication-of-the-workplace.pdf
  49. Benjamin Taylor and Loren Larsen. 2017. Model-driven evaluator bias detection. Patent No. US9652745B2, Filed Nov. 17, 2014, Issued May 16, 2017. https://patents.google.com/patent/US9652745B2/en
  50. Kyla Thomas. 2018. The Labor Market Value of Taste: An Experimental Study of Class Bias in U.S. Employment. Sociological Science 5 (Sept. 2018), 562--595.
  51. Uber Technologies Inc. 2019. Uber Privacy. https://privacy.uber.com/policy/
  52. US EEOC. 1979. Adoption of Questions and Answers To Clarify and Provide a Common Interpretation of the Uniform Guidelines on Employee Selection Procedures. Vol. 44, No. 43. The U.S. Equal Employment Opportunity Commission. https://www.eeoc.gov/policy/docs/qanda_clarify_procedures.html
  53. Michael Veale and Lilian Edwards. 2018. Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on automated decision-making and profiling. Computer Law & Security Review 34, 2 (April 2018), 398--404.
  54. Sahil Verma and Julia Rubin. 2018. Fairness Definitions Explained. In Proceedings of the International Workshop on Software Fairness (FairWare '18). ACM, Gothenburg, Sweden, 1--7.
  55. Sandra Wachter, Brent Mittelstadt, and Luciano Floridi. 2017. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law 7, 2 (June 2017), 76--99.
  56. Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2018. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law & Technology 31, 2 (2018).
  57. Judy Wajcman. 2017. Automation: is it really different this time? The British Journal of Sociology 68, 1 (2017), 119--127.
  58. Julie Yoo. 2017. Pymetrics with Dr. Julie Yoo. https://www.youtube.com/watch?v=9fF1FDLyEmM
  59. Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez-Rodriguez, and Krishna P. Gummadi. 2019. Fairness Constraints: A Flexible Approach for Fair Classification. Journal of Machine Learning Research 20, 75 (2019), 1--42. http://jmlr.org/papers/v20/18-262.html
  60. Nora Zelevansky. 2019. The Big Business of Unconscious Bias. The New York Times (Nov. 2019). https://www.nytimes.com/2019/11/20/style/diversity-consultants.html
