Artificial agents among us: Should we recognize them as agents proper?

Original Paper, published in Ethics and Information Technology

Abstract

In this paper, I discuss whether, in a society where the use of artificial agents is pervasive, these agents should be recognized as having rights like those we accord to group agents. This kind of recognition I understand to be at once social and legal, and I argue that, in order for an artificial agent to be so recognized, it will need to meet the same basic conditions in light of which group agents are granted such recognition. I then explore the implications of granting recognition in this manner. The thesis I will be defending is that artificial agents that do meet the conditions of agency in light of which we ascribe rights to group agents should thereby be recognized as having similar rights. The reason for bringing group agents into the picture is that, like artificial agents, they are not self-evidently agents of the sort to which we would naturally ascribe rights. That, at least, is what the historical record suggests if we look, for example, at what it took for corporations to gain status in the law as group agents entitled to rights and, consequently, as entities subject to responsibilities. This is an example of agency ascribed to a nonhuman agent, and just as a group agent can be described as nonhuman, so can an artificial agent. Therefore, if these two kinds of nonhuman agents can be shown to be sufficiently similar in relevant ways, the agency ascribed to one can also be ascribed to the other—this despite the fact that neither is human, a major impediment when it comes to recognizing an entity as an agent proper, and hence as a bearer of rights.

Notes

  1. In each of these specifications, a right gives one a normative ability to do or not do something: This can be the ability to demand something from someone (rights as claims), the freedom to do something that is not prohibited (rights as privileges), the ability to modify a legal situation (rights as powers), or the ability not to be subject to the powers of others (rights as immunities). For a discussion, see Hohfeld (1917) and Jones (1994).

  2. A caveat before we proceed is that the thermostat example just introduced should not be taken to mean that an artificial device is rational just because it correctly executes the instructions it is designed to execute. Nor should the motivational states we attribute to it be taken to mean that it somehow “wants” or “intends” to do what it does. The example is rather intended to illustrate that we can explain an agent’s actions as if it were rational and intentional, without saying that it is a rational agent driven by actual intentions (a short code sketch illustrating this point follows these notes).

  3. I should note that the parallel between group agents and artificial agents is not new (see Solum 1992; Singer 2013). List and Pettit (2011) and Pettit (2007) seem to reject that parallel, since they consider the agency of a “bare-bones” artificial agent (a very stripped-down robotic device) in contrast to the full agency of group agents. But as can be appreciated from the way artificial agents were just defined, I understand them to comprise a class much more inclusive than that of robots.

  4. This is a standard position on moral responsibility: See Himma (2009).

  5. I should point out, as previously suggested, that while a capacity for normative judgment is an essential condition for ascribing responsibility to an agent, we also have to look at the roles agents play in the environment in which they interact, for these roles are essential in figuring out the kinds of responsibilities that can be ascribed to agents and the consequences that should follow when an agent fails to fulfill those responsibilities. The question of roles is discussed at the end of the “Fourth condition of agency: personhood” section.

  6. For other criticisms concerning the fitness to be held responsible, see Tuomela (2011).

  7. Another example where List and Pettit’s three conditions of responsibility find a counterpart in the law is in the legal concept of force majeure, which excuses a party from responsibility for nonperformance ascribable to events beyond that party’s control.

  8. This structural difference will be taken up in “Structural difference” section.

  9. I should note here that this parallel between neurons and individuals, on the one hand, and individuals and groups, on the other, is itself up for debate. It would be rejected on an incompatibilist view such as hard determinism or metaphysical libertarianism. The former would argue that there is no free will in virtue of which an individual or group agent might control its actions—for that control is only mechanistic (Illes 2005, 45)—such that the question of responsibility wouldn’t arise in the first place. The latter, for its part, would grant that responsibility is an issue, but only for human beings and only if they have “a freedom to originate action uncaused by prior events and influences” (ibid.).

  10. Although it is a fallacy to proceed on a basis of likeness to human beings in ascribing personhood to an agent, there is no denying that humans do react differently in their interaction with a robot when the robot looks human. As the roboticist Daniel Wilson observes (Singer 2009, 405), we unconsciously make judgments based on a robot’s form and “care differently about a humanoid robot versus a dog robot versus a robot that doesn’t look like anything alive.”

  11. Yet another approach to personhood is the interest-based one offered by Briggs (2012), who takes List and Pettit’s view of personhood to mean that “a person is the sort of thing to which it is appropriate to assign conventional rights” (Briggs 2012, 289) and thus suggests that we look to interests as the basis on which to assign those rights, the idea being that it makes no sense to ascribe rights to something (say, a rock) if that thing “cannot benefit from those rights” and so cannot be said to have an interest in them. This idea that something ought to have rights to the extent that it can benefit from them calls up the competence approach, because implicit in that idea is that of an underlying capacity, or ability, to benefit from the rights in question. At the same time, an interest-based approach would be more restrictive in its ascription of rights than the inter-relational approach I will be introducing shortly: if we take interests as a basis of ascription, we may not be able to contemplate the idea of the environment, for example, as having any interest in protection and so as a subject of rights.

  12. Four such approaches are those of Hubbard (2011) (in which the x variable is personhood itself), Rothblatt (2014) (consciousness), Dennett (2013) (intelligence), and Nussbaum (2006, 2011) (rights). What they all have in common is that, in testing for a quality or property x, they do not ask us to imagine what it would be like to enter into the “mind” of the entity we think it might be ascribable to; they only ask us to consider whether this entity is functionally or operationally capable of acting consistently with what it means to have that quality or property.

  13. The approach “allows for the fact that agency develops over time and shifts the focus to the future appropriate behaviour of complex systems, with moral responsibility being more a matter of rational and socially efficient policy that is largely outcomes-focused” (Galliott 2015, 224).

  14. Interestingly for our purposes, this very same reasoning was anticipated by Chief Justice John Marshall in the landmark case Trustees of Dartmouth College v. Woodward (1819), where it was applied to the concept of a business corporation: “From the nature of things, the artificial person called a corporation, must be created, before it can be capable of taking any thing. When, therefore, a charter is granted, and it brings the corporation into existence without any act of the natural persons who compose it, and gives such corporation any privileges, franchises, or property, the law deems the corporation to be first brought into existence, and then clothes it with the granted liberties and property” (italics added).

  15. Another parallel that can be drawn is between a group agent and a multi-agent system (MAS), a system composed of interacting individual agents (computer systems) acting to achieve a common goal (for an introduction to MASs, see Wooldridge 2009). This parallel will not be addressed here because the artificial agents making up an MAS are different from the kinds of agents discussed in this paper.

  16. As Tuomela (2011) has pointed out, the authors do not address the grounds of supervenience—causal, conceptual, or epistemic—and my own discussion of supervenience suffers from the same defect.

  17. There are a number of other theories that take this approach: See the table in List and Pettit (2011, 7).

  18. This is List and Pettit’s way of striking a middle ground between two views of group agency which they term “emergentist” and “eliminativist”: “Where emergentism makes group agents into hyper-realities, eliminativism makes them into non-realities” (List and Pettit 2011, 75). It is not entirely clear, however, how this middle-of-the-road view (epistemological autonomy) can be distinguished from the emergentist view, since List and Pettit use the very same language to describe both: “From the emergentist tradition,” they note, “it went without saying that group agents were agents in their own right, over and above their members” (ibid., 73); compare that with their own approach, on which “we must think of group agents as relatively autonomous entities—agents in their own right” (ibid., 77), thus defending “the idea that group agents can be agents over and above their individual members” (ibid., 78).

  19. Non-redundant realism is criticized by Sylvan (2012), who argues that group agents can be seen through the lens of a redundant realism.

  20. Consider in this regard the opinion expressed by the computer scientist and inventor Ray Kurzweil (quoted in Greenemeier 2010): “Machines will follow a path that mirrors the evolution of humans. Ultimately, however, self-aware, self-improving machines will evolve beyond humans’ ability to control or even understand them.”

  21. Rawls would later be criticized by Habermas (1995, 114) for assimilating rights and liberties to goods—which are more like property, or things you own—but that is a matter that would take us on a long detour, so it cannot be taken up here.

  22. On the historical context in which that judgment and recognition came to be, see Friedman (2005, 136–37). For a broader discussion of corporations as rights-holders, see Clements (2012).

  23. For an overview of the roboethics debate see, for instance, Lin et al. (2012).

  24. The important point here is the emphasis on reasons: As previously mentioned, I am not suggesting that because history or the law evolved as it did in regard to corporations, we should mimic the same line of development in dealing with artificially intelligent agents. Rather, I am saying that the analogies that group agents (and corporations among them) can be shown to have to artificial agents warrant an investigation aimed at exploring whether the justifications for one development (in the past) are sound and might also justify another development (in the future).

  25. For a critique of Nussbaum’s cosmopolitanism, see Ayaz Naseem and Hyslop-Margison (2006).
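
To make the as-if reading in note 2 concrete, here is a minimal illustrative sketch in Python (the names are hypothetical, chosen only for this example). The device does nothing but execute the rules it was built with, yet an observer can predict its behavior by redescribing it in intentional terms, as if it “wanted” to keep the room at its setpoint.

```python
# A minimal sketch of the thermostat example from note 2 (hypothetical
# names). The device only follows its rules; the intentional vocabulary
# in the comments is a redescription, not a report of inner states.

class Thermostat:
    def __init__(self, setpoint: float) -> None:
        self.setpoint = setpoint  # the "goal" an observer attributes to it

    def act(self, room_temperature: float) -> str:
        # Pure rule-following: compare a reading against a threshold.
        if room_temperature < self.setpoint:
            return "heat on"   # as-if gloss: "it wants the room warmer"
        if room_temperature > self.setpoint:
            return "heat off"  # as-if gloss: "it thinks the room is too warm"
        return "idle"

thermostat = Thermostat(setpoint=20.0)
print(thermostat.act(room_temperature=17.5))  # prints "heat on"
```

Both descriptions pick out exactly the same behavior: the intentional gloss adds predictive convenience without any claim about what the device actually wants or intends.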

References

  • Ayaz Naseem, M., & Hyslop-Margison, E. J. (2006). Nussbaum’s concept of cosmopolitanism: Practical possibility or academic delusion? Paideusis, 15(2), 51–60.

  • Briggs, R. (2012). The normative standing of group agents. Episteme, 9(3), 283–291.

  • Burwell, Secretary of Health and Human Services, et al. v. Hobby Lobby Stores, Inc., et al. 573 U.S. (2014). http://www.scotusblog.com/case-files/cases/sebelius-v-hobby-lobby-stores-inc/. Accessed 5 Oct 2014.

  • Clements, J. D. (2012). Corporations are not people: Why they have more rights than you do and what you can do about it. San Francisco, CA: Berrett-Koehler Publishers.

  • Dennett, D. C. (2009). Intentional systems theory. In B. McLaughlin, A. Beckermann, & S. Walter (Eds.), The Oxford handbook of philosophy of mind (pp. 339–350). Oxford: Oxford University Press.

  • Dennett, D. C. (2013). Intuition pumps and other tools for thinking. New York: W. W. Norton & Company.

  • Dietrich, E. (2011). Homo sapiens 2.0: Building the better robots of our future. In M. Anderson & S. Anderson (Eds.), Machine ethics (pp. 531–541). Cambridge: Cambridge University Press.

  • Emerson, R., & Hardwicke, J. W. (1997). Business law. Hauppauge, NY: Barron’s Educational Series.

  • Fischer, M. (2007). A pragmatist cosmopolitan moment: Reconfiguring Nussbaum’s cosmopolitan concentric circles. The Journal of Speculative Philosophy (new series), 21(3), 151–165.

  • Friedman, L. M. (2005). A history of American law. New York, NY: Touchstone.

  • Galliott, J. (2015). Military robots: Mapping the moral landscape. Farnham: Ashgate.

  • Greene, J. D., et al. (2009). Pushing moral buttons: The interaction between personal force and intention in moral judgment. Cognition, 111(3), 364–371.

  • Greenemeier, L. (Ed.). (2010). 12 Events that will change the world. Scientific American, 302(6), 36–50.

  • Habermas, J. (1995). Reconciliation through the public use of reason: Remarks on John Rawls’s political liberalism. The Journal of Philosophy, 92(3), 109–131.

  • Hartmann, T. (2010). Unequal protection: How corporations became “people”—And how you can fight back. San Francisco, CA: Berrett-Koehler Publishers Inc.

  • Himma, K. (2009). Artificial agency, consciousness and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology, 11(1), 19–29.

  • Hohfeld, W. N. (1917). Fundamental legal conceptions as applied in judicial reasoning. Faculty Scholarship Series. Paper 4378. http://digitalcommons.law.yale.edu/cgi/viewcontent.cgi?article=5383&context=fss_papers. Accessed 10 Sept 2016.

  • Hubbard, P. E. (2011). “Do androids dream?”: Personhood and intelligent artifacts. Temple Law Review, 83, 404–474.

  • Hughes, J. (2004). Citizen cyborg. Cambridge, MA: Westview.

  • Illes, J. (2005). Neuroethics: Defining the issues in theory, practice and policy. New York: Oxford University Press.

  • Jones, P. (1994). Rights. London: Macmillan.

  • Krishnan, A. (2009). Killer robots: Legality and ethicality of autonomous weapons. Farnham: Ashgate.

  • Laukyte, M. (2012). Artificial and autonomous: A person? In G. Dodig-Crnkovic, A. Rotolo, et al. (Eds.), Social computing, social cognition, social networks and multiagent systems social turn (SNAMAS 2012) (pp. 73–78). Birmingham: AISB.

  • Lin, P., Abney, K., & Bekey, G. A. (Eds.). (2012). Roboethics: The ethical and social implications of robotics. Cambridge, MA: The MIT Press.

  • List, C., & Pettit, P. (2008). Group agency and supervenience. In J. Hohwy & J. Kallestrup (Eds.), Being reduced: New essays on reduction, explanation, and causation (pp. 75–92). New York: Oxford University Press.

  • List, C., & Pettit, P. (2011). Group agency: The possibility, design, and status of corporate agents. Oxford: Oxford University Press.

  • Naess, A. (2010). The ecology of wisdom: Writings by Arne Naess. Berkeley: Counterpoint.

  • Nussbaum, M. (1994). The therapy of desire: Theory and practice in Hellenistic ethics. Princeton, NJ: Princeton University Press.

  • Nussbaum, M. (1997). Cultivating humanity: A classical defense of reform in liberal education. Cambridge, MA: Harvard University Press.

  • Nussbaum, M. C. (2006). Frontiers of justice: Disability, nationality, species membership. Cambridge, MA: Harvard University Press.

  • Nussbaum, M. C. (2011). Creating capabilities: The human development approach. Cambridge and London: The Belknap Press of Harvard University Press.

  • Pettit, P. (2001). A theory of freedom: From the psychology to the politics of agency. Cambridge: Polity Press.

  • Pettit, P. (2007). Responsibility incorporated. Ethics, 117, 171–201.

  • Rawls, J. (1971). A theory of justice. Cambridge, MA: The Belknap Press of Harvard University Press.

  • Rothblatt, M. (2014). Virtually human: The promise—and the peril—of digital immortality. New York: St. Martin’s Press.

  • Sandbach, F. H. (1989). The Stoics (2nd ed.). Cambridge: Hackett.

  • Singer, P. W. (2009). Wired for war: The robotics revolution and conflict in the 21st century. London: Penguin Books.

  • Singer, A. E. (2013). Corporate moral agency and artificial intelligence. International Journal of Social and Organizational Dynamics in IT, 3(1), 1–13.

  • Solum, L. B. (1992). Legal personhood for artificial intelligences. North Carolina Law Review, 70, 1231–1287.

  • Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.

  • Sylvan, K. L. (2012). How to become a redundant realist? Episteme, 9(3), 271–282.

  • Trustees of Dartmouth College v. Woodward. 17 U.S. 518 (1819). http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=US&vol=17&invol=518. Accessed 29 Oct 2014.

  • Tuomela, R. (1984). A theory of social action. Dordrecht: Kluwer.

  • Tuomela, R. (2011). Review of Christian List and Philip Pettit, Group agency: The possibility, design, and status of corporate agents. Notre Dame Philosophical Reviews. http://ndpr.nd.edu/news/27604-group-agency-the-possibility-design-and-status-of-corporate-agents/. Accessed 11 Nov 2015.

  • Westra, L. (2013). The supranational corporation: Beyond the multinationals. Leiden: Brill.

  • Wooldridge, M. (2009). An introduction to multiagent systems (2nd ed.). Chichester: Wiley.

Acknowledgments

I would like to thank Filippo Valente for copyediting this article and making a few helpful suggestions along the way. I would also like to thank the anonymous reviewers for pointing out several ways in which the argument could be improved.

Author information

Correspondence to Migle Laukyte.

About this article

Cite this article

Laukyte, M. Artificial agents among us: Should we recognize them as agents proper? Ethics Inf Technol 19, 1–17 (2017). https://doi.org/10.1007/s10676-016-9411-3
