One’s own soundtrack: affective music synthesis

Badii, A., Khan, A. and D., F. (2009) One’s own soundtrack: affective music synthesis. In: European and Mediterranean Conference on Information Systems 2009 (EMCIS2009), Izmir, Turkey.

Abstract

Computer music usually sounds mechanical; if the musicality and musical expression of virtual actors could be enhanced according to the user’s mood, the quality of experience would be greatly amplified. We present a solution based on improvisation using cognitive models, case-based reasoning (CBR) and fuzzy values acting on close-to-affect-target musical notes, retrieved from the CBR subsystem for each context. It modifies music pieces according to an interpretation of the user’s emotive state as computed by the emotive input acquisition component of the CALLAS framework. The CALLAS framework incorporates the Pleasure-Arousal-Dominance (PAD) model, which reflects the emotive state of the user and provides the criteria for the music affectivisation process. Using combinations of positive and negative states for affective dynamics, the eight octants of the temperament space specified by this model are stored as base reference emotive states in the case repository, each case including a configurable mapping of affectivisation parameters. The CBR subsystem selects and retrieves suitable previous cases to compute solutions for new cases; the resulting affect values control the music synthesis process, allowing a level of interactivity that creates an engaging environment for experimenting with and learning about expression in music.
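To make the retrieval step described in the abstract concrete, the following Python sketch maps a measured PAD triple to the nearest of eight stored octant cases whose parameters would then steer synthesis. This is a minimal illustration under assumed representations, not the paper's actual implementation: the Case structure and the parameter names (tempo_scale, brightness, dynamics) are hypothetical placeholders for the configurable affectivisation mappings the abstract mentions.

```python
from dataclasses import dataclass
from math import dist

@dataclass
class Case:
    """Hypothetical case record: a base reference emotive state (one PAD
    octant) plus an illustrative mapping of affectivisation parameters."""
    pad: tuple[float, float, float]   # Pleasure, Arousal, Dominance in [-1, 1]
    params: dict[str, float]          # placeholder synthesis controls

# One base case per octant of the PAD temperament space; the parameter
# formulas below are invented stand-ins for the paper's configurable mappings.
CASE_REPOSITORY = [
    Case((p, a, d), {"tempo_scale": 1.0 + 0.2 * a,
                     "brightness":  0.5 + 0.5 * p,
                     "dynamics":    0.5 + 0.5 * d})
    for p in (-1.0, 1.0) for a in (-1.0, 1.0) for d in (-1.0, 1.0)
]

def retrieve_case(pad: tuple[float, float, float]) -> Case:
    """Retrieve the stored case closest (Euclidean) to the measured PAD state."""
    return min(CASE_REPOSITORY, key=lambda c: dist(c.pad, pad))

# Example: a pleasant, highly aroused, mildly dominant state maps to the
# (+P, +A, +D) octant; its params are the affect values passed to synthesis.
case = retrieve_case((0.4, 0.8, 0.3))
print(case.pad, case.params)
```

In the full system the retrieved parameters would be adapted to the new case (the CBR reuse/revise steps) and smoothed by fuzzy values rather than applied verbatim; nearest-octant lookup is only the retrieval stage.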