DOI: 10.1145/3478384.3478407

research-article

The development of a dance-musification model with the use of machine learning techniques under COVID-19 restrictions

Published: 15 October 2021

ABSTRACT

Interactive technologies enable dancers to control music in real time through their movement. This paper presents the design and development of a model that takes a dancer's movement as input and, using machine learning techniques, outputs music structurally related to the dance. Both the technical and artistic aspects of the model's development are described in detail. In particular, the paper compares machine learning techniques with traditional coding in interactive dance and music applications. It also describes the important distinction between movement sonification and dance musification, and explains why the model presented here falls into the latter category. Special focus is given to the implications of the COVID-19 restrictions for the collaboration with the dancer.
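The abstract describes a learned mapping from a dancer's movement to musical output. The paper's actual model is not reproduced here; as a minimal illustrative sketch only, the mapping idea can be shown with a k-nearest-neighbors regression from hypothetical movement features (e.g. per-limb motion intensities) to sound parameters (pitch, amplitude). All names, feature choices, and training values below are assumptions for illustration, not the authors' implementation.

```python
import math

# Hypothetical training pairs: movement feature vectors (e.g. smoothed
# per-limb motion intensities in [0, 1]) -> sound parameters (pitch Hz,
# amplitude). Values are illustrative, not from the paper.
TRAIN = [
    ((0.1, 0.2, 0.0), (220.0, 0.2)),  # slow gesture -> low, quiet tone
    ((0.9, 0.8, 0.7), (880.0, 0.9)),  # fast gesture -> high, loud tone
    ((0.5, 0.4, 0.6), (440.0, 0.5)),  # medium gesture
]

def knn_map(features, k=2):
    """Map a movement feature vector to sound parameters by averaging
    the outputs of the k nearest training examples."""
    dists = sorted((math.dist(features, x), y) for x, y in TRAIN)
    nearest = [y for _, y in dists[:k]]
    # Component-wise mean of the k nearest output vectors.
    return tuple(sum(vals) / len(nearest) for vals in zip(*nearest))

pitch, amp = knn_map((0.8, 0.7, 0.6))  # a fairly energetic gesture
```

In a live setting this mapping would run per frame on incoming sensor or video features, with the resulting parameters driving a synthesizer; the trained examples stand in for the demonstration-by-example workflow that interactive machine learning tools support.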


Published in

AM '21: Proceedings of the 16th International Audio Mostly Conference
September 2021, 283 pages
ISBN: 9781450385695
DOI: 10.1145/3478384
Copyright © 2021 ACM

Publisher

Association for Computing Machinery, New York, NY, United States


Qualifiers

• research-article
• Refereed limited

Acceptance Rates

Overall Acceptance Rate: 177 of 275 submissions, 64%