Predicting the similarity between expressive performances of music from measurements of tempo and dynamics.

Authors
Type: Published Article
Journal: The Journal of the Acoustical Society of America (ISSN 0001-4966)
Publisher: Acoustical Society of America
Publication Date
Volume: 117
Issue: 1
Pages: 391–399
Identifiers: PMID 15704432
Source: Medline
License: Unknown

Abstract

Measurements of tempo and dynamics from audio files or MIDI data are frequently used to gain insight into a performer's contribution to music. The measured variations in tempo and dynamics are often represented in different formats by different authors, yet few systematic comparisons have been made between these representations. Moreover, it is unknown which data representation comes closest to subjective perception. The reported study tests the perceptual validity of existing data representations by comparing their ability to explain the subjective similarity between pairs of performances. In two experiments, 40 participants rated the similarity between performances of a Chopin prelude and a Mozart sonata. Models based on different representations of the tempo and dynamics of the performances were fitted to these similarity ratings. The results favor data representations other than those generally used, and imply that listeners compare performances in a different way than is often assumed. For example, the best fit was obtained with models based on absolute tempo and absolute tempo times loudness, while conventional models based on normalized variations, or on correlations between tempo profiles and loudness profiles, did not explain the similarity ratings well.
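To make the contrast between these representations concrete, the following sketch compares two hypothetical performance profiles under an absolute representation, a normalized-variation representation, and a correlation-based one. This is purely illustrative and not the authors' code; the function names and the sample tempo and loudness values are invented for the example.

```python
# Illustrative sketch (not from the paper): comparing two performances from
# per-beat tempo and loudness profiles under different data representations.
# All names and sample data below are hypothetical.

def dissimilarity_absolute(a, b):
    """Mean absolute difference between raw (absolute) profiles."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def normalize(profile):
    """Express a profile relative to its own mean, discarding overall level."""
    mean = sum(profile) / len(profile)
    return [x / mean for x in profile]

def dissimilarity_normalized(a, b):
    """Compare only the shape of the variation, not the absolute level."""
    return dissimilarity_absolute(normalize(a), normalize(b))

def correlation(a, b):
    """Pearson correlation between two profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Two hypothetical performances: per-beat tempo (BPM) and loudness (sone).
tempo_1 = [120, 118, 110, 100, 115]
tempo_2 = [96, 94, 88, 80, 92]      # similar shape, slower overall
loud_1 = [8.0, 8.5, 9.0, 7.5, 8.0]
loud_2 = [8.1, 8.4, 9.1, 7.4, 8.2]

# The "absolute tempo times loudness" representation favored by the study:
tl_1 = [t * l for t, l in zip(tempo_1, loud_1)]
tl_2 = [t * l for t, l in zip(tempo_2, loud_2)]

print(dissimilarity_absolute(tempo_1, tempo_2))    # large: base tempi differ
print(dissimilarity_normalized(tempo_1, tempo_2))  # near zero: same shape
print(correlation(tempo_1, tempo_2))               # near 1: shapes covary
print(dissimilarity_absolute(tl_1, tl_2))          # combined representation
```

The point of the sketch is that normalized and correlation-based measures judge these two performances nearly identical (their expressive shape matches), whereas measures based on absolute tempo keep them apart, which is the kind of difference between representations the study's similarity ratings discriminate between.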
