Abstract
The most intriguing and ethically challenging roles of robots in society are those of collaborator and social partner. We propose that such robots must have the capacity to learn, represent, activate, and apply social and moral norms—they must have a norm capacity. We offer a theoretical analysis of two parallel questions: what constitutes this norm capacity in humans and how might we implement it in robots? We propose that the human norm system has four properties: flexible learning despite a general logical format, structured representations, context-sensitive activation, and continuous updating. We explore two possible models that describe how norms are cognitively represented and activated in context-specific ways and draw implications for robotic architectures that would implement either model.
Notes
1. This is a cognitive definition of a norm, and it allows for an agent to endorse an illusory norm—when all three conditions are met but community members do not in fact follow the instruction and do not in fact expect others to follow the instruction. If we want to model and predict the agent’s behavior, however, we can still consider the person to follow a perceived norm (Aarts and Dijksterhuis 2003).
2. The threshold of sufficiency will typically be a majority but may vary by norm type and community.
References
Aarts H, Dijksterhuis A (2003) The silence of the library: environment, situational norm, and social behavior. J Pers Soc Psychol 84(1):18–28
Andrighetto G, Villatoro D, Conte R (2010) Norm internalization in artificial societies. AI Commun 23(4):325–339
Bicchieri C (2006) The grammar of society: the nature and dynamics of social norms. Cambridge University Press, New York
Bower GH (1970) Organizational factors in memory. Cogn Psychol 1(1):18–46. doi:10.1016/0010-0285(70)90003-4
Brennan G, Eriksson L, Goodin RE, Southwood N (2013) Explaining norms. Oxford University Press, New York
Cialdini RB, Kallgren CA, Reno RR (1991) A focus theory of normative conduct: a theoretical refinement and reevaluation of the role of norms in human behavior. In: Zanna MP (ed) Advances in experimental social psychology, vol 24. Academic Press, San Diego, CA, pp 201–234
Harvey MD, Enzle ME (1981) A cognitive model of social norms for understanding the transgression helping effect. J Pers Soc Psychol 41(5):866–875. doi:10.1037/0022-3514.41.5.866
Hechter M, Opp KD (eds) (2001) Social norms. Russell Sage Foundation, New York
Malle BF (2015) Integrating robot ethics and machine morality: the study and design of moral competence in robots. Ethics Inf Technol [online first]. doi:10.1007/s10676-015-9367-8
Malle BF, Scheutz M (2014) Moral competence in social robots. In: IEEE international symposium on ethics in engineering, science, and technology. IEEE, Chicago, IL, pp 30–35
Moor JH (2006) The nature, importance, and difficulty of machine ethics. IEEE Intell Syst 21(4):18–21. doi:10.1109/MIS.2006.80
Nourbakhsh IR (2013) Robot futures. MIT Press, Cambridge
Schank RC, Abelson RP (1977) Scripts, plans, goals, and understanding. Erlbaum, Hillsdale
Sullins JP (2011) Introduction: open questions in roboethics. Philos Technol 24(3):233. doi:10.1007/s13347-011-0043-6
Šabanović S (2010) Robots in society, society in robots. Int J Soc Robot 2(4):439–450. doi:10.1007/s12369-010-0066-7
Ullmann-Margalit E (1977) The emergence of norms. Clarendon library of logic and philosophy. Clarendon Press, Oxford
Van Berkum JJA, Holleman B, Nieuwland M, Otten M, Murre J (2009) Right or wrong? The brain’s fast response to morally objectionable statements. Psychol Sci 20(9):1092–1099. doi:10.1111/j.1467-9280.2009.02411.x
Veruggio G, Solis J, Van der Loos M (2011) Roboethics: ethics applied to robotics. IEEE Robot Autom Mag 18(1):21–22. doi:10.1109/MRA.2010.940149
Acknowledgements
This work was in part funded by a grant from the Office of Naval Research (ONR), No. N00014-14-1-0144, and a grant from the Defense Advanced Research Projects Agency (DARPA), DARPA SIMPLEX No. 14-46-FP-097. The opinions expressed here are our own and do not necessarily reflect the views of ONR or DARPA.
Copyright information
© 2017 Springer International Publishing AG
Cite this chapter
Malle, B.F., Scheutz, M., Austerweil, J.L. (2017). Networks of Social and Moral Norms in Human and Robot Agents. In: Aldinhas Ferreira, M., Silva Sequeira, J., Tokhi, M., Kadar, E., Virk, G. (eds) A World with Robots. Intelligent Systems, Control and Automation: Science and Engineering, vol 84. Springer, Cham. https://doi.org/10.1007/978-3-319-46667-5_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-46665-1
Online ISBN: 978-3-319-46667-5