According to Robert Rowe, interactive music systems can be classified according to:
- Interpretation: Score-driven vs. performance-driven
- Response: Transformative, generative, or sequenced
- Paradigm: Instrument vs. player
You will find my original implementations under the corresponding categories in the submenu of this page.
‘Score-driven programs use predetermined event collections, or stored music fragments, to match against music arriving at the input. They are likely to organize events using the traditional categories of beat, meter, and tempo. Such categories allow the composer to preserve and employ familiar ways of thinking about temporal flow, such as specifying some events to occur on the downbeat of the next measure or at the end of every fourth bar.
Performance-driven programs do not anticipate the realization of any particular score. In other words, they do not have a stored representation of the music they expect to find at the input. Further, performance-driven programs tend not to employ traditional metric categories but often use more general parameters, involving perceptual measures such as density and regularity, to describe the temporal behavior of music coming in.
Transformative methods take some existing musical material and apply transformations to it to produce variants. According to the technique, these variants may or may not be recognizably related to the original. For transformative algorithms, the source material is complete musical input. This material need not be stored, however – often such transformations are applied to live input as it arrives.
For generative algorithms, on the other hand, what source material there is will be elementary or fragmentary – for example, stored scales or duration sets. Generative methods use sets of rules to produce complete musical output from the stored fundamental material, taking pitch structures from basic scalar patterns according to random distributions, for instance, or applying serial procedures to sets of allowed duration values.
Sequenced techniques use prerecorded music fragments in response to some realtime input. Some aspects of these fragments may be varied in performance, such as the tempo of playback, dynamic shape, slight rhythmic variations, etc.
Instrument paradigm systems are concerned with constructing an extended musical instrument: performance gestures from a human player are analyzed by the computer and guide an elaborated output exceeding normal instrumental response. Imagining such a system being played by a single performer, the musical result would be thought of as a solo.
Systems following a player paradigm try to construct an artificial player, a musical presence with a personality and behavior of its own, though it may vary in the degree to which it follows the lead of a human partner. A player paradigm system played by a single human would produce an output more like a duet.’
Rowe, R.: Interactive Music Systems: Machine Listening and Composing. MIT Press, Cambridge, MA, USA, 1992.
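The three response methods quoted above can be illustrated with a minimal sketch. This is not code from Rowe's book; the function names and the use of MIDI-style pitch numbers in place of real-time input are my own illustrative assumptions.

```python
import random

# Illustrative sketch of Rowe's three response methods, operating on
# lists of MIDI-style pitch numbers instead of live input streams.
# All names here are hypothetical, not from Rowe's text.

def transformative(live_input, interval=7):
    """Transformative: apply a transformation (here, transposition
    up a perfect fifth) to complete incoming musical material."""
    return [pitch + interval for pitch in live_input]

def generative(scale, length=4, seed=None):
    """Generative: build output from elementary stored material
    (a scale) by drawing pitches according to a random distribution."""
    rng = random.Random(seed)
    return [rng.choice(scale) for _ in range(length)]

def sequenced(fragments, trigger):
    """Sequenced: play back a prerecorded fragment selected by
    some real-time input event (the trigger index)."""
    return fragments[trigger % len(fragments)]

# Responding to the same situation with each method:
live = [60, 62, 64]                      # incoming notes: C, D, E
c_major = [60, 62, 64, 65, 67, 69, 71]   # stored elementary material
stored = [[60, 64, 67], [62, 65, 69]]    # prerecorded fragments

print(transformative(live))          # [67, 69, 71]
print(generative(c_major, seed=1))   # four pitches drawn from the scale
print(sequenced(stored, trigger=1))  # [62, 65, 69]
```

Note how the three functions differ in what they store: the transformative method stores nothing and works on the full input, the generative method stores only fragmentary material plus rules, and the sequenced method stores complete fragments and only chooses among them.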