Context-Sensitive Affect Recognition for a Robotic Game Companion

Authors: Ginevra Castellano, Iolanda Leite, André Pereira, Carlos Martinho, Ana Paiva, and Peter W. McOwan
Published: 1 June 2014

Abstract

Social perception abilities are among the most important skills necessary for robots to engage humans in natural forms of interaction. Affect-sensitive robots are more likely to be able to establish and maintain believable interactions over extended periods of time. Nevertheless, the integration of affect recognition frameworks in real-time human-robot interaction scenarios is still underexplored. In this article, we propose and evaluate a context-sensitive affect recognition framework for a robotic game companion for children. The robot can automatically detect affective states experienced by children in an interactive chess game scenario. The affect recognition framework is based on the automatic extraction of task features and social interaction-based features. Vision-based indicators of the children’s nonverbal behaviour are merged with contextual features related to the game and the interaction, and given as input to support vector machines to create a context-sensitive multimodal system for affect recognition. The affect recognition framework is fully integrated in an architecture for adaptive human-robot interaction. Experimental evaluation showed that children’s affect can be successfully predicted using a combination of behavioural and contextual data related to the game and the interaction with the robot. It was found that contextual data alone can be used to successfully predict a subset of affective dimensions, such as interest toward the robot. Experiments also showed that engagement with the robot can be predicted using information about the user’s valence, interest, and anticipatory behaviour. These results provide evidence that social engagement can be modelled as a state consisting of affect and attention components in the context of the interaction.


Published in

ACM Transactions on Interactive Intelligent Systems, Volume 4, Issue 2 (July 2014)
101 pages
ISSN: 2160-6455
EISSN: 2160-6463
DOI: 10.1145/2638542

          Copyright © 2014 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 1 June 2014
• Accepted: 1 June 2009
• Revised: 1 March 2009
• Received: 1 February 2007

