
Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent?

Ethics and Information Technology

Abstract

In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, and articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, the Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled theories with respect to the prerequisites that an ICT must satisfy in order to count as a moral agent accountable for its behavior. I argue that each of the various elements of the necessary conditions for moral agency presupposes consciousness, i.e., the capacity for inner subjective experience, like that of pain, or, as Nagel puts it, the possession of an internal something-it-is-like-to-be. I ultimately conclude that the issue of whether artificial moral agency is possible depends on the issue of whether it is possible for ICTs to be conscious.


References

  • K. Coleman. Computing and Moral Responsibility. Stanford Encyclopedia of Philosophy. Available at http://plato.stanford.edu/entries/computing-responsibility/, 2004.

  • A. Eshleman. Moral Responsibility. Stanford Encyclopedia of Philosophy. Available at http://plato.stanford.edu/entries/moral-responsibility/, 2001.

  • L. Floridi. Information Ethics: On the Philosophical Foundation of Computer Ethics. Ethics and Information Technology, 1(1): 37–56, 1999.

  • L. Floridi and J. Sanders. Artificial Evil and the Foundation of Computer Ethics. Ethics and Information Technology, 3(1): 56–66, 2001.

  • K.E. Himma. What is a Problem for All is a Problem for None: Substance Dualism, Physicalism, and the Mind-body Problem. American Philosophical Quarterly, 42(2): 81–92, 2005.


  • D. Johnson. Computer Systems: Moral Entities but not Moral Agents. Ethics and Information Technology, 8(4): 195–204, 2006.

  • F.W.J. Keulartz et al. Pragmatism in Progress. Techne: Journal of the Society for Philosophy and Technology, 7(3): 38–49, 2004.


  • J. Kim. Reasons and the First Person. In J. Bransen and S. Cuypers, editors, Human Action, Deliberation, and Causation, pp. 67–87. Kluwer Publishers, Dordrecht, 1998.

  • B. Latour. On Technical Mediation – Philosophy, Sociology, Genealogy. Common Knowledge, 3: 29–64, 1994.


  • K. Miller and D. Larson. Angels and Artifacts: Moral Agents in the Age of Computers and Networks. Journal of Information, Communication & Ethics in Society, 3(3): 113, 2005.


  • J. Moor. Reason, Relativity, and Responsibility in Computer Ethics. Computers and Society, 28(1): 14–21, 1998.


Author information

Correspondence to Kenneth Einar Himma.


About this article

Cite this article

Himma, K.E. Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent? Ethics Inf Technol 11, 19–29 (2009). https://doi.org/10.1007/s10676-008-9167-5
