Investigating affect in algorithmic composition systems

Williams, D., Kirke, A., Miranda, E. R., Roesch, E. B. (ORCID: https://orcid.org/0000-0002-8913-4173), Daly, I., & Nasuto, S. (2015). Investigating affect in algorithmic composition systems. Psychology of Music, 43(6), 831–854. ISSN 0305-7356. DOI: 10.1177/0305735614543282

Abstract: There has been a significant amount of work implementing systems for algorithmic composition that target specific emotional responses in the listener, but a full review of this work is not currently available. This gap is a shared obstacle for those entering the field. Our aim is therefore to give an overview of progress on these affectively driven systems for algorithmic composition. Performative and transformative systems are included and differentiated where appropriate, highlighting the challenges these systems now face if they are to be adapted to include, or have already incorporated, some form of affective control. Possible real-time applications for such systems, using affectively driven algorithmic composition together with biophysical sensing to monitor and induce affective states in the listener, are suggested.

References
Bailes, F., & Dean, R. T. (2009). Listeners discern affective variation in computer-generated musical
sounds. Perception, 38(9), 1386.
Bent, I. (1987). Analysis. The new Grove handbooks in music series. Basingstoke, UK: Macmillan.
Berg, J., & Wingstedt, J. (2005). Relations between selected musical parameters and expressed emotions:
Extending the potential of computer entertainment. Proceedings of the 2005 ACM SIGCHI
International Conference on Advances in computer entertainment technology held in Valencia,
Spain, 15–17 June (pp. 164–171). Valencia, Spain: ACM.
Birchfield, D. (2003). Generative model for the creation of musical emotion, meaning, and form. In
Proceedings of the 2003 ACM SIGMM workshop on Experiential telepresence (pp. 99–104). ACM.
Bresin, R., & Friberg, A. (2000). Emotional coloring of computer-controlled music performances. Computer
Music Journal, 24(4), 44–63.
Bresin, R., Friberg, A., & Sundberg, J. (2002). Director musices: The KTH performance rules system.
Proceedings of SIGMUS, 46, 43–48.
Chanel, G., Kronegg, J., Grandjean, D., & Pun, T. (2006). Emotion assessment: Arousal evaluation using
EEGs and peripheral physiological signals. In Multimedia Content Representation, Classification and
Security, Lecture Notes in Computer Science, 4105 (pp. 530–537). Berlin, Heidelberg: Springer.
Chih-Fang, H., & Yin-Jyun, L. (2011). A study of the integrated automated emotion music with the
motion gesture synthesis. In Computer Science and Automation Engineering (CSAE), 2011 IEEE
International Conference on (Vol. 3, pp. 267–272). IEEE.
Chung, J., & Vercoe, G. S. (2006). The affective remixer: Personalized music arranging. In CHI’06 extended
abstracts on Human factors in computing systems (pp. 393–398). Montréal, Québec, Canada: ACM.
Collins, N. (2009). Musical form and algorithmic composition. Contemporary Music Review, 28(1),
103–114.
Collins, S. C. (1989). Subjective and autonomic responses to Western classical music (Doctoral dissertation).
University of Manchester.
Cope, D. (1989). Experiments in musical intelligence (EMI): Non-linear linguistic-based composition.
Journal of New Music Research, 18(1–2), 117–139.
Cope, D. (1992). Computer modeling of musical intelligence in EMI. Computer Music Journal, 16(2),
69–83.
Cross, I. (2005). Music and meaning, ambiguity and evolution. In Proceedings of the 8th International
Conference on Music Perception & Cognition. Evanston, Illinois: Society for Music Perception &
Cognition.
Crowder, R. G. (1984). Perception of the major/minor distinction: I. Historical and theoretical foundations.
Psychomusicology: A Journal of Research in Music Cognition, 4(1–2), 3–12.
Dahlstedt, P. (2007). Autonomous evolution of complete piano pieces and performances. In Proceedings
of the MusicAL Workshop.
Delgado, M., Fajardo, W., & Molina-Solana, M. (2009). Inmamusys: Intelligent multiagent music system.
Expert Systems with Applications, 36(3), 4574–4580.
Doppler, J., Rubisch, J., Jaksche, M., & Raffaseder, H. (2011). RaPScoM: towards composition strategies in
a rapid score music prototyping framework. In Proceedings of the 6th Audio Mostly Conference: A
Conference on Interaction with Sound (pp. 8–14). ACM.
Dubnov, S., McAdams, S., & Reynolds, R. (2006). Structural and affective aspects of music from statistical
audio signal analysis. Journal of the American Society for Information Science and Technology, 57(11),
1526–1536.
Dzuris, L., & Peterson, J. (2003). Data abstraction in emotionally tagged models for compositional
design in music.
Eaton, J., & Miranda, E. (2013). Real-time notation using brainwave control. In Proceedings of the Sound
and Music Computing Conference, SMC 2013, 30th July – 3rd August, KTH Royal Institute of
Technology, Stockholm, Sweden.
Eerola, T., & Vuoskoski, J. K. (2010). A comparison of the discrete and dimensional models of emotion in
music. Psychology of Music, 39(1), 18–49.
Eladhari, M., Nieuwdorp, R., & Fridenfalk, M. (2006). The soundtrack of your mind: mind music-adaptive
audio for game characters. In Proceedings of the 2006 ACM SIGCHI international conference on
Advances in computer entertainment technology (p. 54). ACM.
Eng, K., Klein, D., Babler, A., Bernardet, U., Blanchard, M., Costa, M., … Verschure, P. F. (2003).
Design for a brain revisited: The neuromorphic design and functionality of the interactive space ‘Ada’.
Reviews in the Neurosciences, 14(1–2), 145–180.
Fontaine, J. R. J., Scherer, K. R., Roesch, E. B., & Ellsworth, P. C. (2007). The world of emotions is not
two-dimensional. Psychological Science, 18(12), 1050–1057.
Friberg, A., Bresin, R., & Sundberg, J. (2006). Overview of the KTH rule system for musical performance.
Advances in Cognitive Psychology, 2(2–3), 145–161.
Friberg, A., Colombo, V., Frydén, L., & Sundberg, J. (2000). Generating musical performances with
Director Musices. Computer Music Journal, 24(3), 23–29.
Gabrielsson, A. (2001a). Emotion perceived and emotion felt: Same or different? Musicae Scientiae, Special
Issue, 2001–2002, 123–147.
Gabrielsson, A. (2001b). Emotions in strong experiences with music. In P. N. Juslin & J. A. Sloboda (Eds),
Music and emotion: Theory and research (pp. 431–449). New York, NY: Oxford University Press.
Gabrielsson, A. (2003). Music performance research at the millennium. Psychology of Music, 31(3),
221–272.
Gabrielsson, A., & Juslin, P. N. (1996). Emotional expression in music performance: Between the performer’s
intention and the listener’s experience. Psychology of Music, 24(1), 68–91.
Gabrielsson, A., & Juslin, P. N. (2003). Emotional expression in music. In R. J. Davidson, K. R. Scherer &
H. H. Goldsmith (Eds.), Handbook of affective sciences (pp. 503–534). New York, NY: Oxford University
Press.
Gabrielsson, A., & Lindström, E. (2001). The influence of musical structure on emotional expression. In
P. N. Juslin & J. A. Sloboda (Eds.), Music and emotion: Theory and research (pp. 223–248). New York,
NY: Oxford University Press.
Gartland-Jones, A. (2003). MusicBlox: A real-time algorithmic composition system incorporating a
distributed interactive genetic algorithm. In Applications of Evolutionary Computing, Lecture notes in
Computer Science, 2611 (pp. 490–501). Berlin Heidelberg: Springer.
Gerardi, G. M., & Gerken, L. (1995). The development of affective responses to modality and melodic contour.
Music Perception, 12(3), 279–290.
Gundlach, R. H. (1935). Factors determining the characterization of musical phrases. The American
Journal of Psychology, 47(4), 624–643.
Hanninen, D. A. (2012). A theory of music analysis: On segmentation and associative organization. Rochester,
NY: University of Rochester Press.
Harley, J. (1995). Generative processes in algorithmic composition: Chaos and music. Leonardo, 28(3),
221–224.
Heinlein, C. P. (1928). The affective characters of the major and minor modes in music. Journal of
Comparative Psychology, 8(2), 101–142.
Hevner, K. (1935). Expression in music: A discussion of experimental studies and theories. Psychological
Review, 42(2), 186–204.
Hevner, K. (1936). Experimental studies of the elements of expression in music. The American Journal of
Psychology, 48(2), 246–268.
Hevner, K. (1937). The affective value of pitch and tempo in music. The American Journal of Psychology,
49(4), 621–630.
Hiller, L., & Isaacson, L. M. (1957). Illiac suite, for string quartet (Vol. 30, No. 3). New Music Edition. New
York: Carl Fischer LLC.
Hoeberechts, M., Demopoulos, R. J., & Katchabaw, M. (2007). A flexible music composition engine. Audio
Mostly.
Hoeberechts, M., & Shantz, J. (2009). Real-time emotional adaptation in automated composition. Audio
Mostly, 1–8.
Horner, A., & Goldberg, D. (1991). Genetic algorithms and computer-assisted music composition. Urbana,
51(61801), 337–431.
Huang, C. F. (2011). A novel automated way to generate content-based background music using
algorithmic composition. International Journal of Sound, Music and Technology (IJSMT), 1(1).
Jacob, B. (1995). Composing with genetic algorithms. In Proceedings of the International Computer Music
Conference, ICMC September. Banff Alberta.
Jiang, M., & Zhou, C. (2010). Automated composition system based on GA. In Intelligent Systems and
Knowledge Engineering (ISKE), 2010 International Conference on (pp. 380-383). IEEE.
Juslin, P. N. (1997). Perceived emotional expression in synthesized performances of a short melody:
Capturing the listener’s judgment policy. Musicae Scientiae, 1(2), 225–256.
Juslin, P. N., Friberg, A., & Bresin, R. (2002). Toward a computational model of expression in music performance:
The GERM model. Musicae Scientiae, 5(1 suppl), 63–122.
Juslin, P. N., & Laukka, P. (2004). Expression, perception, and induction of musical emotions: A review
and a questionnaire study of everyday listening. Journal of New Music Research, 33(3), 217–238.
Juslin, P. N., & Lindström, E. (2010). Musical expression of emotions: modelling listeners’ judgements of
composed and performed features. Music Analysis, 29(1–3), 334–364.
Juslin, P. N., & Sloboda, J. A. (2010). Handbook of music and emotion: Theory, research, applications. Oxford,
UK: Oxford University Press.
Juslin, P. N., & Västfjäll, D. (2008). Emotional responses to music: The need to consider underlying
mechanisms. Behavioral and Brain Sciences, 31(5), 559–575; discussion 575–621.
doi:10.1017/S0140525X08005293
Kallinen, K., & Ravaja, N. (2006). Emotion perceived and emotion felt: Same and different. Musicae
Scientiae, 10(2), 191–213.
Kim, S., & André, E. (2004). Composing affective music with a generate and sense approach. In Proceedings
of FLAIRS 2004. Retrieved from http://www.aaai.org/Papers/FLAIRS/2004/Flairs04-011.pdf
Kirke, A., & Miranda, E. (2011a). Combining EEG frontal asymmetry studies with affective algorithmic
composition and expressive performance models. Ann Arbor, MI: MPublishing, University of Michigan
Library.
Kirke, A., & Miranda, E. (2011b). Emergent construction of melodic pitch and hierarchy through agents communicating
emotion without melodic intelligence. Ann Arbor, MI: MPublishing, University of Michigan
Library.
Kirke, A., Miranda, E. R., & Nasuto, S. (2012). Learning to make feelings: Expressive performance as a
part of a machine learning tool for sound-based emotion therapy and control. In Cross-Disciplinary
Perspectives on Expressive Performance Workshop: Proceedings of the 9th International Symposium
on Computer Music Modeling and Retrieval, London, UK, 19–22 June. Queen Mary University of
London, UK.
Lamont, A., & Eerola, T. (2011). Music and emotion: Themes and development. Musicae Scientiae, 15(2),
139–145.
Le Groux, S., & Verschure, P. F. M. J. (2010). Towards adaptive music generation by reinforcement learning
of musical tension. Proceedings of the 7th Sound and Music Computing Conference, Barcelona,
Spain, 21–24 July. Universitat Pompeu Fabra, Barcelona, Spain.
Legaspi, R., Hashimoto, Y., Moriyama, K., Kurihara, S., & Numao, M. (2007). Music compositional intelligence
with an affective flavor. In Proceedings of the 12th international conference on Intelligent user
interfaces (pp. 216–224). Retrieved from http://dl.acm.org/citation.cfm?id=1216335
Levi, D. S. (1978). Expressive qualities in music perception and music education. Journal of Research in
Music Education, 26(4), 425–435.
Levi, D. S. (1979). Melodic expression, melodic structure, and emotion (Doctoral dissertation). New School
for Social Research.
Lin, Y.-P., Wang, C.-H., Jung, T.-P., Wu, T.-L., Jeng, S.-K., Duann, J.-R., & Chen, J.-H. (2010).
EEG-based emotion recognition in music listening. IEEE Transactions on Biomedical Engineering, 57(7),
1798–1806.
Livingstone, S. R., & Brown, A. R. (2005). Dynamic response: real-time adaptation for music emotion.
In Proceedings of the second Australasian conference on Interactive entertainment (pp. 105-111).
Creativity & Cognition Studios Press.
Livingstone, S. R., Mühlberger, R., Brown, A. R., & Loch, A. (2007). Controlling musical emotionality:
An affective computational architecture for influencing musical emotions. Digital Creativity, 18(1),
43–53.
Livingstone, S. R., Muhlberger, R., Brown, A. R., & Thompson, W. F. (2010). Changing musical emotion:
A computational rule system for modifying score and performance. Computer Music Journal, 34(1),
41–64.
Manzolli, J., & Verschure, P. F. M. J. (2005). Roboser: A real-world composition system. Computer Music
Journal, 29(3), 55–74.
Mattek, A. (2011). Emotional communication in computer generated music: Experimenting with affective
algorithms. In Proceedings of the 26th Annual Conference of the Society for Electro-Acoustic Music in
the United States. Miami, Florida: University of Miami Frost School of Music.
Miranda, E., & Brouse, A. (2005). Toward direct brain-computer musical interfaces. In Proceedings of the
2005 conference on New interfaces for musical expression (pp. 216–219). Retrieved from http://dl.acm.
org/citation.cfm?id=1086000
Miranda, E. R. (2001). Composing music with computers (1st ed.). Boston, MA: Focal Press.
Miranda, E. R., Magee, W. L., Wilson, J. J., Eaton, J., & Palaniappan, R. (2011). Brain-computer music
interfacing (BCMI) from basic research to the real world of special needs. Music and Medicine, 3(3),
134–140.
Miranda, E. R., Sharman, K., Kilborn, K., & Duncan, A. (2003). On harnessing the electroencephalogram for
the musical braincap. Computer Music Journal, 27(2), 80–102. doi:10.1162/014892603322022682
Moroni, A., Manzolli, J., Von Zuben, F., & Gudwin, R. (2000). Vox populi: An interactive evolutionary
system for algorithmic music composition. Leonardo Music Journal, 10, 49–54.
Nielzén, S., & Cesarec, Z. (1982a). Emotional experience of music as a function of musical structure.
Psychology of Music, 10(2), 7–17.
Nielzén, S., & Cesarec, Z. (1982b). Emotional experience of music by psychiatric patients compared with
normal subjects. Acta Psychiatrica Scandinavica, 65(6), 450–460.
doi:10.1111/j.1600-0447.1982.tb00868.x
Nielzén, S., & Cesarec, Z. (1982c). The effect of mental illness on the emotional experience of music.
European Archives of Psychiatry and Clinical Neuroscience, 231(6), 527–538.
Nierhaus, G. (2009). Algorithmic composition paradigms of automated music generation. New York, NY:
Springer Publishing.
Numao, M., Takagi, S., & Nakamura, K. (2002a). CAUI demonstration: Composing music based on
human feelings. In AAAI/IAAI (pp. 1010–1012).
Numao, M., Takagi, S., & Nakamura, K. (2002b). Constructive adaptive user interfaces: Composing music
based on human feelings. In Proceedings of the National Conference on Artificial Intelligence
(pp. 193–198). Menlo Park, CA: AAAI Press.
Oliveira, A. P., & Cardoso, A. (2007). Towards affective-psychophysiological foundations for music production.
In Affective Computing and Intelligent Interaction, Lecture notes in Computer Science, 4738 (pp.
511–522). Berlin Heidelberg: Springer.
Oliveras Castro, V. (2009). Towards an emotion-driven interactive music system: Bridging the gaps
between affect, physiology and sound generation.
Palmer, C. (1997). Music performance. Annual Review of Psychology, 48(1), 115–138.
Papadopoulos, G., & Wiggins, G. (1999). AI methods for algorithmic composition: A survey, a critical view
and future prospects. Proceedings of the AISB Symposium on Musical Creativity held in Edinburgh,
UK, 18 April (pp. 110–117). University of Edinburgh, UK.
Plans, D., & Morelli, D. (2012). Experience-driven procedural music generation for games. Computational
Intelligence and AI in Games, IEEE Transactions on, 4(3), 192–198.
Rebollo, R., Hans-Henning, P., & Skidmore, A. J. (1995). The effects of neurofeedback training with background
music on EEG patterns of ADD and ADHD children. International Journal of Arts Medicine, 4,
24–31.
Rigg, M. (1937). Musical expression: An investigation of the theories of Erich Sorantin. Journal of
Experimental Psychology, 21(4), 442–455.
Rigg, M. G. (1940). Speed as a determiner of musical mood. Journal of Experimental Psychology, 27(5),
566–571.
Rowe, R. (1992). Interactive music systems: Machine listening and composing. Cambridge, MA: MIT
press.
Rubisch, J., Doppler, J., & Raffaseder, H. (2011). RaPScoM: A framework for rapid prototyping of
semantically enhanced score music. In Proceedings of the Sound and Music Computing Conference.
Russell, J. A. (2003). Core affect and the psychological construction of emotion. Psychological Review,
110(1), 145–172.
Russell, J. A., & Barrett, L. F. (1999). Core affect, prototypical emotional episodes, and other things called
emotion: Dissecting the elephant. Journal of personality and social psychology, 76(5), 805–819.
Schachter, S., & Singer, J. (1962). Cognitive, social, and physiological determinants of emotional state.
Psychological Review, 69(5), 379–399.
Scherer, K. R. (1972). Acoustic concomitants of emotional dimensions: Judging affect from synthesized
tone sequences. Proceedings of the Eastern Psychological Association Meeting held in Boston,
Massachusetts, 19 April. Eastern Psychological Association Meeting, Boston, MA: Education
Resources Information Center.
Scherer, K. R. (1981). Speech and emotional states. In J. Darby (Ed.), Speech evaluation in psychiatry (pp.
189–220). New York, NY: Grune & Stratton.
Scherer, K. R. (2004). Which emotions can be induced by music? What are the underlying mechanisms?
And how can we measure them? Journal of New Music Research, 33(3), 239–251.
Scherer, K. R., & Oshinsky, J. S. (1977). Cue utilization in emotion attribution from auditory stimuli.
Motivation and Emotion, 1(4), 331–346.
Scherer, K. R., Zentner, M. R., & Schacht, A. (2002). Emotional states generated by music: An exploratory
study of music experts. Musicae Scientiae, 6(1), 149–171.
Schmidt, L. A., & Trainor, L. J. (2001). Frontal brain electrical activity (EEG) distinguishes valence and
intensity of musical emotions. Cognition & Emotion, 15(4), 487–500.
Schubert, E. (1999). Measurement and time series analysis of emotion in music (Doctoral dissertation).
University of New South Wales.
Sorantin, E. (1932a). The problem of meaning in music. Nashville, TN: Marshall and Bruce Company.
Sorantin, E. (1932b). The problem of musical expression: A philosophical and psychological study. Nashville,
TN: Marshall & Bruce Company.
Stapleford, T. (1998). The harmony, melody, and form of herman, a real-time music generation system.
Master’s thesis, University of Edinburgh.
Strehl, U., Leins, U., Goth, G., Klinger, C., Hinterberger, T., & Birbaumer, N. (2006). Self-regulation of
slow cortical potentials: A new treatment for children with attention-deficit/hyperactivity disorder.
Pediatrics, 118(5), e1530–e1540.
Sugimoto, T., Legaspi, R., Ota, A., Moriyama, K., Kurihara, S., & Numao, M. (2008). Modelling affective-based
music compositional intelligence with the aid of ANS analyses. Knowledge-Based Systems,
21(3), 200–208.
Thompson, W. F., & Robitaille, B. (1992). Can composers express emotions through music? Empirical
Studies of the Arts, 10(1), 79–89.
Västfjäll, D. (2001). Emotion induction through music: A review of the musical mood induction procedure.
Musicae Scientiae, Special Issue, 2001–2002, 173–211.
Ventura, F., Oliveira, A., & Cardoso, A. (2009). An emotion-driven interactive system. In Proceedings of
the 14th Portuguese Conference on Artificial Intelligence, EPIA 2009. 12th - 15th October, Universidade
de Aveiro, Portugal.
Vercoe, G. S. (2006). Moodtrack: Practical methods for assembling emotion-driven music. Boston:
Massachusetts Institute of Technology.
Vuoskoski, J. K., & Eerola, T. (2011). Measuring music-induced emotion: A comparison of emotion models,
personality biases, and intensity of experiences. Musicae Scientiae, 15(2), 159–173.
Wallis, I., Ingalls, T., & Campana, E. (2008). Computer-generating emotional music: The design of an
affective music algorithm. Proceedings of the 11th International Conference on Digital Audio Effects
held in Espoo, Finland, 1–4 September (pp. 7–12). Helsinki University of Technology, Espoo, Finland.
Wallis, I., Ingalls, T., Campana, E., & Goodman, J. (2011). A rule-based generative music system controlled
by desired valence and arousal. Proceedings of the 8th Sound and Music Computing Conference held
in Padova, Italy, 6–9 July. University of Padova, Italy.
Wedin, L. (1969). Dimension analysis of emotional expression in music. Swedish Journal of Musicology,
51, 119–140.
Wedin, L. (1972). Multidimensional scaling of emotional expression in music. Swedish Journal of
Musicology, 54, 1–17.
Wiggins, G. A. (1999). Automated generation of musical harmony: What’s missing? In Proceedings of the
International Joint Conference on Artificial Intelligence (IJCAI99).
Williams, D., Kirke, A., Miranda, E. R., Roesch, E. B., & Nasuto, S. J. (2013). Towards affective algorithmic
composition. In G. Luck & O. Brabant (Eds.), Proceedings of the 3rd International Conference
on Music & Emotion (ICME3), Jyväskylä, Finland, 11–15 June. ISBN 978-951-39-5250-1.
Department of Music, University of Jyväskylä, Finland.
Wingstedt, J., Liljedahl, M., Lindberg, S., & Berg, J. (2005). REMUPP: An interactive tool for investigating
musical properties and relations. Proceedings of the 2005 conference on New interfaces for musical
expression held in Vancouver, Canada, 26–28 May (pp. 232–235). University of British Columbia,
Canada.
Winter, R. (2005). Interactive music: Compositional techniques for communicating different emotional
qualities. Unpublished masters dissertation, University of York.
Zentner, M., Grandjean, D., & Scherer, K. R. (2008). Emotions evoked by the sound of music:
Characterization, classification, and measurement. Emotion, 8(4), 494–521.
Zentner, M. R., Meylan, S., & Scherer, K. R. (2000). Exploring musical emotions across five genres of
music. In Sixth International Conference of the Society for Music Perception and Cognition held in
Keele, UK, August (pp. 5–10). Keele University, UK.
Zhu, H., Wang, S., & Wang, Z. (2008). Emotional music generation using interactive genetic algorithm.
In Computer Science and Software Engineering, 2008 International Conference on (Vol. 1, pp.
345–348). IEEE.