One’s own soundtrack: affective music synthesis

Badii, A., Khan, A. and D., F. (2009) One’s own soundtrack: affective music synthesis. In: European and Mediterranean Conference on Information Systems 2009 (EMCIS2009), Izmir, Turkey.

Full text not archived in this repository.

Abstract/Summary

Computer music usually sounds mechanical; if the musicality and musical expression of virtual actors could be enhanced according to the user’s mood, the quality of experience would therefore be amplified. We present a solution based on improvisation using cognitive models, case-based reasoning (CBR) and fuzzy values acting on close-to-affect-target musical notes retrieved from the CBR repository per context. The system modifies music pieces according to an interpretation of the user’s emotive state as computed by the emotive input acquisition component of the CALLAS framework. The CALLAS framework incorporates the Pleasure-Arousal-Dominance (PAD) model, which reflects the emotive state of the user and provides the criteria for the music affectivisation process. Using combinations of positive and negative states for affective dynamics, the octants of the temperament space specified by this model are stored as base reference emotive states in the case repository, each case including a configurable mapping of affectivisation parameters. The CBR subsystem selects and retrieves suitable previous cases to compute solutions for new cases; the resulting affect values control the music synthesis process, allowing a level of interactivity that makes for an engaging environment in which to experiment with and learn about expression in music.
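The retrieval step described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it assumes the eight PAD octants (labelled with Mehrabian's temperament names) are stored as cases, each carrying a hypothetical set of affectivisation parameters (`tempo_scale`, `mode` are invented here for illustration), and retrieves the nearest case to a measured PAD state by Euclidean distance.

```python
from dataclasses import dataclass
import math

@dataclass
class Case:
    """A base reference emotive state stored in the case repository."""
    name: str
    pad: tuple    # (pleasure, arousal, dominance), each in [-1, +1]
    params: dict  # hypothetical affectivisation parameters

# The eight octants of PAD temperament space as base reference cases.
# The parameter values below are illustrative placeholders only.
CASES = [
    Case("exuberant",  (+1, +1, +1), {"tempo_scale": 1.3, "mode": "major"}),
    Case("dependent",  (+1, +1, -1), {"tempo_scale": 1.1, "mode": "major"}),
    Case("relaxed",    (+1, -1, +1), {"tempo_scale": 0.9, "mode": "major"}),
    Case("docile",     (+1, -1, -1), {"tempo_scale": 0.8, "mode": "major"}),
    Case("hostile",    (-1, +1, +1), {"tempo_scale": 1.4, "mode": "minor"}),
    Case("anxious",    (-1, +1, -1), {"tempo_scale": 1.2, "mode": "minor"}),
    Case("disdainful", (-1, -1, +1), {"tempo_scale": 0.9, "mode": "minor"}),
    Case("bored",      (-1, -1, -1), {"tempo_scale": 0.7, "mode": "minor"}),
]

def retrieve(pad):
    """Retrieve the stored case closest to `pad` in PAD space."""
    return min(CASES, key=lambda c: math.dist(c.pad, pad))

# Example: a mildly displeased, aroused, slightly submissive state
# maps to the "anxious" octant, whose parameters would then drive
# the music synthesis process.
state = (-0.3, 0.6, -0.2)
case = retrieve(state)
```

In the paper's system the retrieved case is not applied verbatim: fuzzy values interpolate between the base case and the measured state before the parameters act on the musical notes; that adaptation step is omitted here.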

Item Type: Conference or Workshop Item (Paper)
Divisions: Faculty of Science > School of Systems Engineering
ID Code: 14572
Uncontrolled Keywords: Affective, Emotive, Music synthesis, Emotion, CALLAS
Additional Information: Awarded as best practical paper
Publisher: Brunel University
