Could Social Robots Make Us Kinder or Crueller to Humans and Animals?

Abstract

The Montréal Declaration for Responsible Development of Artificial Intelligence states that emerging technologies ought not “encourage cruel behaviour towards robots that take on the appearance of human beings or animals and act in a similar fashion.” The idea of a causal link between cruelty and kindness to artificial and living beings, human or animal, is controversial and underexplored, despite its increasing relevance to robotics. Kate Darling recently marshalled Immanuel Kant’s argument—that cruelty to animals promotes cruelty to people—to argue for an analogous link concerning social robots. Others, such as Johnson and Verdicchio, have counter-argued that animal analogies are often flawed, partly because they ignore social robots’ true nature, including their lack of sentience. This, they say, weakens Darling’s argument that social robots will have virtue-promoting or vice-promoting effects regarding our treatment of living beings. Certain ideas in this debate, including those of anthropomorphism, projection, animal analogies, and Kant’s causal claim, require clarification and critical attention. Concentrating on robot animals, this paper examines strengths and weaknesses on both sides of this argument. It finds there is some reason for thinking that social robots may causally affect virtue, especially in terms of the moral development of children and responses to nonhuman animals. This conclusion has implications for future robot design and interaction.

Notes

  1. We use ‘autonomous’ [30] here to mean the capacity to self-initiate action and to exhibit agency (relatively) independently of human control, including acts of self-maintenance and self-preservation (unlike, say, a laptop computer).

  2. Although Kant’s claim here relates to the development of virtue/vice, Kant himself was clearly not a virtue ethicist. Nor are the details of his deontological position particularly relevant for Darling (or us), other than what bears on his causal claim about the effects of cruelty/kindness to animals.

  3. In fact, some studies [53] suggest that slaughter-workers show more human-directed aggression (and less empathy for animals). However, Johnson and Verdicchio might argue that these studies are not conclusive.

  4. Note that Johnson and Verdicchio do acknowledge that the carryover effects of violent films and the like remain an “unresolved issue” [15, p 299].

  5. It is a live, if unsettled, question whether routine slaughter-work causally predisposes to mistreatment of other animals and humans.

  6. It is important to stress that empirical studies alone (e.g. see Sect. 2) cannot resolve these conceptual questions.

  7. The word ‘projection’ is itself perhaps redolent of an act involving misattributed qualities.

  8. This is, of course, contestable. Johnson and Verdicchio deny it: “Moral patients derive their moral status from their capacity to suffer and be harmed” [15, p 295]. So would many others. Some, however, deny that sentience is necessary for moral patiency (though they may not attribute intrinsic moral standing to robots or nonliving entities) [83].

References

  1. Montréal Declaration for Responsible Development of Artificial Intelligence (2016) https://www.montrealdeclaration-responsibleai.com/the-declaration

  2. Levy D (2009) The ethical treatment of artificially conscious robots. Int J Soc Robot 1(3):209–216

  3. Anderson M, Anderson SL (eds) (2011) Machine ethics. Cambridge University Press, Cambridge

  4. Gunkel D (2012) The machine question: critical perspectives on AI, robots, and ethics. MIT Press, Cambridge

  5. Wallach W, Allen C (2009) Moral machines: teaching robots right from wrong. Oxford University Press, Oxford

  6. Sullins JP (2006) When is a robot a moral agent? Int Rev Inf Ethics 6(12):23–30

  7. Sparrow R (2017) Robots, rape, and representation. Int J Soc Robot 9(4):465–477

  8. Cappuccio ML, Peeters A, McDonald W (2019) Sympathy for Dolores: moral consideration for robots based on virtue and recognition. Philos Technol (online), 1–23. https://www.rdcu.be/braeT

  9. Darling K (2016) Extending legal protection to social robots: the effects of anthropomorphism, empathy, and violent behaviour towards robotic objects. In: Calo R, Froomkin AM, Kerr I (eds) Robot law. Edward Elgar, Cheltenham, pp 213–231

  10. Anderson SL (2011) The unacceptability of Asimov’s three laws of robotics as a basis for machine ethics. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 285–296

  11. Kant I ([1784–5] 1997) Moral philosophy: Collin’s lecture notes. In: Heath P, Schneewind JB (eds and trans) Lectures on ethics (Cambridge edition of the works of Immanuel Kant). Cambridge University Press, Cambridge, pp 37–222

  12. Calo R (2015) Robotics and the lessons of cyberlaw. Calif Law Rev 103:513–563

  13. Coeckelbergh M (2018) Why care about robots? Empathy, moral standing, and the language of suffering. Kairos J Philos Sci 20(1):41–158

  14. Black D (2019) Machines with faces: robot bodies and the problem of cruelty. Body Soc 25(2):3–27

  15. Johnson DG, Verdicchio M (2018) Why robots should not be treated like animals. Ethics Inf Technol 20(4):291–301

  16. Eyssel F, Kuchenbrandt D (2012) Social categorization of social robots: anthropomorphism as a function of robot group membership. Br J Soc Psychol 51(4):724–731

  17. Bartneck C, Kulić D, Croft E, Zoghbi S (2009) Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int J Soc Robot 1(1):71–81

  18. Hursthouse R (1999) On virtue ethics. Oxford University Press, Oxford

  19. Swanton C (2003) Virtue ethics: a pluralistic view. Clarendon Press, Oxford

  20. Ross WD (1930) The right and the good. Clarendon Press, Oxford

  21. Aristotle (2003) The Nicomachean ethics. Penguin Classics, London

  22. Singer P (1995) Animal liberation. Random House, London

  23. Regan T (2004) The case for animal rights. University of California Press, Berkeley

  24. Hursthouse R (2013) Ethics, humans and other animals: an introduction with readings. Routledge, London

  25. Kahn PH, Friedman B, Perez-Granados DR, Freier NG (2006) Robotic pets in the lives of preschool children. Interact Stud 7(3):405–436

  26. Dautenhahn K (2013) Human–robot interaction. In: Soegaard M, Dam RF (eds) The encyclopedia of human–computer interaction, 2nd edn. The Interaction Design Foundation, Aarhus

  27. Mori M (1970) The uncanny valley. Energy 7(4):33–35

  28. Coghlan S, Waycott J, Neve BB, Vetere F (2018) Using robot pets instead of companion animals for older people: a case of ‘reinventing the wheel’? In: Proceedings of the 30th Australian conference on computer–human interaction, pp 172–183

  29. Sandry E (2015) Re-evaluating the form and communication of social robots. Int J Soc Robot 7(3):335–346

  30. Breazeal C (2003) Toward sociable robots. Robot Auton Syst 42(3–4):167–175

  31. Melson GF, Kahn Jr PH, Beck AM, Friedman B, Roberts T, Garrett E (2005) Robots as dogs? Children’s interactions with the robotic dog AIBO and a live Australian shepherd. In: CHI’05 extended abstracts on human factors in computing systems. ACM, pp 1649–1652

  32. Friedman B, Kahn Jr PH, Hagman J (2003) Hardware companions? What online AIBO discussion forums reveal about the human–robotic relationship. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM, pp 273–280

  33. Reeves B, Nass CI (1996) The media equation: how people treat computers, television, and new media like real people and places. Cambridge University Press, Cambridge

  34. Whitby B (2008) Sometimes it’s hard to be a robot: a call for action on the ethics of abusing artificial agents. Interact Comput 20:338–341

  35. de Graaf MM (2016) An ethical evaluation of human–robot relationships. Int J Soc Robot 8(4):589–598

  36. Turkle S (2017) Alone together: why we expect more from technology and less from each other. Hachette, New York

  37. Hamill J (2017) Office mounts touching memorial for security robot that drowned itself. New York Post. https://www.nypost.com/2017/07/20/office-mounts-touching-memorial-for-security-robot-that-drowned-itself/

  38. Every time Boston Dynamics has abused a robot (2017) YouTube. https://youtu.be/4PaTWufUqqU

  39. Sparrow R (2016) Kicking a robot dog. In: 2016 11th ACM/IEEE international conference on human–robot interaction, pp 229–229

  40. Seo SH, Geiskkovitch D, Nakane M, King C, Young, JE (2015) Poor thing! Would you feel sorry for a simulated robot? A comparison of empathy toward a physical and a simulated robot. In: 2015 10th ACM/IEEE international conference on human–robot interaction, pp 125–132

  41. Bartneck C, Hu J (2008) Exploring the abuse of robots. Interact Stud 9(3):415–433

  42. Carpenter J (2016) Culture and human–robot interaction in militarized spaces: a war story. The Atlantic. https://www.theatlantic.com/technology/archive/2013/09/funerals-for-fallen-robots/279861/

  43. Nomura T, Kanda T, Kidokoro H, Suehiro Y, Yamada S (2016) Why do children abuse robots? Interact Stud 17(3):347–369

  44. Dadds MR, Turner CM, McAloon J (2002) Developmental links between cruelty to animals and human violence. Aust N Z J Criminol 35(3):363–382

  45. Coeckelbergh M (2011) Humans, animals, and robots: a phenomenological approach to human–robot relations. Int J Soc Robot 3(2):197–204

  46. Duffy BR (2003) Anthropomorphism and the social robot. Robot Auton Syst 42(3/4):177

  47. Damiano L, Dumouchel P (2018) Anthropomorphism in human–robot co-evolution. Front Psychol. https://doi.org/10.3389/fpsyg.2018.00468

  48. Hegel F, Krach S, Kircher T, Wrede B, Sagerer G (2008) Understanding social robots: a user study on anthropomorphism. In: RO-MAN 2008—the 17th IEEE international symposium on robot and human interactive communication, pp 574–579

  49. Anderson CA, Bushman BJ (2001) Effects of violent video games on aggressive behavior, aggressive cognition, aggressive affect, physiological arousal, and prosocial behavior: a meta-analytic review of the scientific literature. Psychol Sci 12(5):353–359

  50. Ferguson CJ (2015) Does movie or video game violence predict societal violence? It depends on what you look at and when. J Commun 65(1):193–212

  51. Calverley D (2006) Android science and animal rights, does an analogy exist? Connect Sci 18(4):403–417

  52. Hogan K (2017) Is the machine question the same question as the animal question? Ethics Inf Technol 19:29–38

  53. Richards E, Signal T, Taylor N (2013) A different cut? Comparing attitudes toward animals and propensity for aggression within two primary industry cohorts—farmers and meatworkers. Soc Anim 21(4):395–413

  54. Johnson DG, Verdicchio M (2017) AI anxiety. J Assoc Inf Sci Technol 68(9):2267–2270

  55. Dowsett A, Jackson M (2019) The effect of violence and competition within video games on aggression. Comput Hum Behav 99:22–27

  56. Ferguson CJ, Colwell J (2018) A meaner, more callous digital world for youth? The relationship between violent digital games, motivation, bullying, and civic behavior among children. Psychol Pop Med Cult 7(3):202

  57. Shibuya A, Sakamoto A, Ihori N, Yukawa S (2008) The effects of the presence and contexts of video game violence on children: a longitudinal study in Japan. Simul Gaming 39(4):528–539

  58. Arluke A (2002) Animal abuse as dirty play. Symb Interact 25(4):405–430

  59. Gullone E (2012) Animal cruelty, antisocial behaviour, and aggression: more than a link. Palgrave Macmillan, Basingstoke

  60. Kant I ([1785] 1998) Groundwork of the metaphysics of morals. Gregor MJ (trans). Cambridge University Press, Cambridge

  61. O’Neill O (1998) Kant on duties regarding nonrational nature. Aristot Soc Suppl 71(1):211–228

  62. Rozuel C (2011) The moral threat of compartmentalization: self, roles and responsibility. J Bus Ethics 102(4):685–697

  63. Monte CF (1997) Beneath the mask: an introduction to theories of personality, 6th edn. Harcourt Brace, Fort Worth

  64. Fink J (2012) Anthropomorphism and human likeness in the design of robots and human–robot interaction. In: International conference on social robotics. Springer, Berlin, Heidelberg, pp 199–208

  65. Airenti G (2015) The cognitive bases of anthropomorphism: from relatedness to empathy. Int J Soc Robot 7(1):117–127

  66. Turkle S (2007) Authenticity in the age of digital companions. Interact Stud 8:501–517. https://doi.org/10.1075/is.8.3.11tur

  67. Złotowski J, Proudfoot D, Yogeeswaran K, Bartneck C (2015) Anthropomorphism: opportunities and challenges in human–robot interaction. Int J Soc Robot 7(3):347–360

  68. Melson GF, Kahn PH Jr, Beck A, Friedman B, Roberts T, Garrett E, Gill BT (2009) Children’s behavior toward and understanding of robotic and living dogs. J Appl Dev Psychol 30(2):92–102

  69. Rodogno R (2016) Social robots, fiction, and sentimentality. Ethics Inf Technol 18(4):257–268

  70. Lamarque P (1981) How can we fear and pity fictions? Br J Aesthet 21(4):291–304

  71. Schneider S (2006) The paradox of fiction. The internet encyclopedia of philosophy. http://www.iep.utm.edu/f/fict-par.htm

  72. Sparrow R (2002) The march of the robot dogs. Ethics Inf Technol 4(4):305–318

  73. Darling K, Nandy P, Breazeal C (2015) Empathic concern and the effect of stories in human-robot interaction. In: 2015 24th IEEE international symposium on robot and human interactive communication (RO-MAN), pp 770–775

  74. Rosenthal-von der Pütten AM, Krämer NC, Hoffmann L, Sobieraj S, Eimler SC (2013) An experimental study on emotional reactions towards a robot. Int J Soc Robot 5(1):17–34

  75. The Good Place (2018) Chapter 7: The eternal shriek. Netflix, Scotts Valley

  76. Horstmann AC, Bock N, Linhuber E, Szczuka JM, Straßmann C, Krämer NC (2018) Do a robot’s social skills and its objection discourage interactants from switching the robot off? PLoS ONE 13(7):e0201581

  77. Damon W, Lerner RM, Eisenberg N (eds) (2006) Handbook of child psychology, social, emotional, and personality development, vol 3. Wiley, Hoboken

  78. Vollmer AL, Read R, Trippas D, Belpaeme T (2018) Children conform, adults resist: a robot group induced peer pressure on normative social conformity. Sci Robot. https://doi.org/10.1126/scirobotics.aat7111

  79. Goodliff G, Canning N, Parry J, Miller L (eds) (2017) Young children’s play and creativity: multiple voices. Taylor & Francis, Abingdon

  80. Carr D, Harrison T (2015) Educating character through stories. Imprint Academic, Exeter

  81. Almerico GM (2014) Building character through literacy with children’s literature. Res Higher Educ J 26

  82. Coeckelbergh M (2010) Moral appearances: emotions, robots, and human morality. Ethics Inf Technol 12(3):235–241

  83. Taylor PW (2011) Respect for nature: a theory of environmental ethics. Princeton University Press, Princeton

Acknowledgements

We would like to thank the anonymous referees for their very helpful comments.

Funding

Funding was provided by the Australian Research Council (AU) (Grant No. FT170100420) and the Melbourne Networked Society Institute.

Author information

Corresponding author

Correspondence to Simon Coghlan.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Coghlan, S., Vetere, F., Waycott, J. et al. Could Social Robots Make Us Kinder or Crueller to Humans and Animals?. Int J of Soc Robotics 11, 741–751 (2019). https://doi.org/10.1007/s12369-019-00583-2
