Composition Language and Context Tree Models

The composition model in MPS represents musical pieces by means of a tree-based structure containing musical context information. Besides musical contexts, the model comprises so-called context modifiers, context generators and control structures, all of which are explained in the following sections.

The model was developed in conjunction with a comprehensible domain-specific composition language in which music can easily be notated. The following sections demonstrate how the composition language can be used to create composition models. The models can later be transformed into scores, lead sheets and a number of other representations, as explained in chapter Music Transformation and Visualization.

Introductory Example

As a first example, a model of the motif of Ludwig van Beethoven’s world-famous Symphony No. 5 in C minor, Op. 67, is presented. The score looks like this:

Consider the corresponding context model:

Each composition model defines a root node labeled composition. It contains a tree of context objects. In the previously shown model, the key of the composition (namely C minor) is defined by means of a tonalCenter context. Below that, a metric context (a 2/4 time signature) is defined. On the next layer, a rhythm is specified representing the famous rhythm of the motif, namely an eighth rest followed by three eighth notes and a half note. The syntax used to describe this rhythm is part of the composition language, which is also introduced in this chapter. Refer to section Rhythms for more details.

The pitches to be played are specified in terms of zero-based degrees on the minor scale, meaning that the number 0 represents the root note C, 1 the note D, 2 the note Eb and so on. Note that the context tree diverges below the scale node into two separate branches. This is interpreted as follows: all contexts between the composition and the scale node are first combined with the left branch (the node pitches 4 4 4 2), and then sequentially with the right branch (the node pitches 3 3 3 1 together with the rhythmic extension).

Note that both combined context sets contain the same rhythm, but different pitches. This is a pattern which is used very frequently in musical compositions: the same musical context (in this case a rhythm) is combined with a set of other musical contexts of another type (in this case pitches). The left branch effectively represents the first two measures of the composition. In this case, the pitches evaluate three times to G and once to Eb. In the right branch, which represents the rest of the motif, the pitches evaluate three times to F and once to D. The D, however, is rhythmically different from the Eb in the second measure, because its duration is two half notes instead of one. This is only a minor modification compared to the original rhythm. In the composition model, it is not required to define a new rhythm. Instead, only the modification of the current rhythm must be specified, which is done with a so-called context modifier named rhythmic extension, which doubles the duration of the last half note.
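The degree-to-pitch mapping described above can be illustrated with a minimal Python sketch (illustrative only, not part of MPS; the helper name degree_to_note is hypothetical):

```python
# Zero-based scale-degree lookup on the natural minor scale rooted at C,
# as described above (sketch; MPS resolves degrees internally).
MINOR_SCALE = [0, 2, 3, 5, 7, 8, 10]  # semitone offsets of the natural minor scale
NOTE_NAMES_C_MINOR = {0: "C", 2: "D", 3: "Eb", 5: "F", 7: "G", 8: "Ab", 10: "Bb"}

def degree_to_note(degree):
    """Map a zero-based scale degree to a note name in C minor."""
    return NOTE_NAMES_C_MINOR[MINOR_SCALE[degree % len(MINOR_SCALE)]]

# Left branch of the motif: pitches 4 4 4 2 -> G G G Eb
print([degree_to_note(d) for d in [4, 4, 4, 2]])  # ['G', 'G', 'G', 'Eb']
# Right branch: pitches 3 3 3 1 -> F F F D
print([degree_to_note(d) for d in [3, 3, 3, 1]])  # ['F', 'F', 'F', 'D']
```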

The context model can also be represented in terms of a simple text file in the corresponding domain-specific language. Compare the following syntactical representation with the previously introduced graphical model.

composition {
    time 2/4, tonalCenter Cm
    {
        rhythm _8 8 8 8 2
        {
            scale minor
            {
                pitches 4 4 4 2
                pitches 3 3 3 1
                {
                    rhythmicExtension duration 2
                }
            }
        }
    }
}

Key Concepts

The preceding example demonstrates a few key aspects of the model:

  1. Musical information is split into separate context dimensions such as meter, rhythm, scales and pitches.
  2. Contexts are arranged hierarchically, and each branch of the tree combines all contexts on the path from the root.
  3. Redundant information is avoided, since common contexts (such as the shared rhythm) are specified only once.

These concepts, among other mechanisms, are further elaborated in the following sections.

Hierarchical Structures

Musical compositions are usually to some extent organized and perceived in hierarchically arranged units. Compositions can generally have multiple hierarchical levels of organization.

The hierarchical nature of context tree models serves multiple purposes, which are elaborated in the following sections.

Inheritance

A very effective way to avoid redundant information is to harness a technique commonly used in object-oriented software development called inheritance. It involves defining hierarchical dependencies between object types in order to utilize already existing properties and/or functionality from another object type.

In MPS, the principle of inheritance is applied to musical context tree models. This is illustrated using a musical example. Consider the following score of the beginning of Queen’s Bohemian Rhapsody:

The score contains some redundant information. For example, the parts are arranged homorhythmically, i.e. the rhythms of all four parts are exactly identical except for the end of the third measure. Also, the lyrics for all parts are exactly identical. In traditional scores, the composer or arranger has no other choice but to write the same rhythms and syllables all over again. In MPS context tree models, however, the rhythm and the lyrics have to be specified only once and can be reused using various techniques. One of these techniques is inheritance, which is demonstrated in the following context tree model representing the first two measures of the piece:

The inheritance hierarchy is made visible by arrows and by the positions of the context nodes. An arrow is interpreted as "all inherited contexts are passed on to the node in the direction of the arrow". Inheriting nodes are normally drawn on the next hierarchy level, which implies a lower position in the graph visualization. In this way, the instrument (vocals), the tonal center (B flat major), the rhythm, the context harmony (G minor seventh), and the lyrics ("Is this the real life?") are aggregated and passed on to the left parallelization node. It has four child nodes, which produce the four individual vocal parts of the first measure. They have different pitches, but share all the previously enumerated contexts. Using inheritance, all common contexts have to be specified only once, which is a major advantage of context tree models.

The same technique is used in the second measure, which inherits common instrument, tonal center, base rhythm, context harmony and lyrics contexts. Note that further optimization methods are used in the model, which are explained in the following sections.

Polymorphism

Another model concept inspired by object-oriented programming is polymorphism, which allows particular parts of inherited functionality to be overridden (and also extended). In context tree models, this concept can be used to override contextual information. To elaborate, another context tree model of Bohemian Rhapsody is shown below, this time containing context information for the first four measures of the piece.

The time signature change in the third measure is modeled using a polymorphic construction. In the model, the 5/4 time signature context is positioned on a lower hierarchy level than the 4/4 time context at the top of the tree. The metric context 5/4 effectively overrides the 4/4 context temporarily (namely for one measure). After the subtree of the 5/4 measure is processed, the main 4/4 time signature becomes operative again. This technique can be applied to any musical context. For instance, temporary changes regarding meter, tempo, instruments, rhythms, pitches and harmonic contexts can be modeled.

Note that this representation has additional value compared to a purely sequential representation. In the context tree model, it is directly visible that the 4/4 meter is of higher importance in the composition than the 5/4 meter. In fact, it becomes apparent that the 5/4 meter is only used as a temporary "excursus" from the standard meter of the piece and plays a subsidiary role.

Auto Expansion

When combining contexts defining musical sequences (e.g. rhythms, pitches or lyric syllables), these sequences do not necessarily need to have the same length. If the number of available rhythm notes, pitches and syllables does not match, the system automatically applies a so-called auto expansion: shorter sequences are repeated until the longest sequence is consumed completely. Consider the following example:

It results in the following score:

The language representation looks like this:

composition
{
    instrument vocals, time 6/8, key D
    {
        rhythm 4.
        {
            pitches(startOctave 5) 2
            {
                lyrics "Freu-de"
            }
            pitches(startOctave 5) 3 4 4 3 2 1 0 0 1 2
            {
                lyrics "schö-ner Göt-ter-fun-ken, Toch-ter aus E-"
            }
        }
        rhythm 5/8 8 4 _8 _4.
        {
            pitches(startOctave 5) 2 1 1
            {
                lyrics "ly-si-um"
            }
        }
    }
}

In the previous example, musical sequences of different lengths are combined. In particular, the leftmost subtree combines the rhythm 4. (i.e. a dotted quarter note) with a pitch on the third scale degree (zero-based, i.e. 2) and the two lyric syllables Freu-de. While the rhythm and the pitch sequence contain only one element each, the lyric sequence contains two syllables. The system automatically wraps and repeats the rhythm and the pitches until all syllables are processed. In sum, this results in two dotted quarter notes with the same pitch but different syllables.
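The wrap-and-repeat behavior described above can be sketched in Python (a hypothetical auto_expand helper, not the actual MPS implementation):

```python
from itertools import cycle, islice

def auto_expand(*sequences):
    """Repeat shorter sequences until the longest one is fully consumed,
    mirroring the auto expansion behavior described above (sketch)."""
    longest = max(len(s) for s in sequences)
    return [list(islice(cycle(s), longest)) for s in sequences]

# Leftmost subtree of the example: one rhythm note, one pitch, two syllables.
rhythm, pitches, syllables = auto_expand(["4."], [2], ["Freu-", "de"])
print(rhythm, pitches, syllables)
# ['4.', '4.'] [2, 2] ['Freu-', 'de']
```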

Auto expansion was also used in the previous Bohemian Rhapsody example, in which the rhythm _8 8 8 8 4 4 is combined with the lyrics „Is this the real life?” and multiple pitch contexts for individual parts, namely pitches 2, pitches 0, pitches -1 and pitches -3. The rhythm contains six rhythmic notes, the first of which is a rest, leaving five assignable notes for syllables and pitches. The lyrics contain exactly five matching syllables. The pitch contexts, however, contain only one pitch each. Therefore, the pitches are repeated until the rhythm and the lyrics are consumed completely.

Using auto expansion, redundant musical sequences can be represented effectively, providing yet another useful compression method for context tree models.

Modularization using Fragments

Another technique to avoid redundant information in context tree models is modularization. To this end, arbitrary subtrees can be extracted into so-called fragments. These are named subtrees which can be referenced from other places in the model. If a subtree occurs multiple times in a model, it only has to be defined once in a fragment and can then be referenced wherever it is required.

As an example, consider the English horn theme of Antonin Dvorak’s Symphony No. 9 in E minor, „From the New World”, Op. 95, B. 178:

One possible context model for this score looks like this:

However, this model can be further optimized, as it contains some redundant information. Compare measures 1 and 3, which are exactly identical. The corresponding subtrees, i.e. the subtrees originating at the rhythms 8. 16 4, can be extracted to a fragment and referenced twice:

Contexts

Rhythms

Rhythm is one of the most central aspects in music. In the composition model, rhythms are represented as an individual context dimension and can be expressed using the corresponding domain-specific language. The syntax is very simple, yet powerful. Consider the following example:

rhythm _8 8 8 8 2

It yields the motif rhythm of Beethoven’s famous Symphony No. 5 in C Minor, Op. 67:

Of course, more complex rhythms can be defined using the language. Refer to the following table for a detailed explanation of note and rest duration syntax variants.

Syntax | Example | Description
n (Integer Literal) | 2 4 8 16 | Integer literals are interpreted as reciprocal durations, e.g. 4 represents a quarter note, 8 an eighth note etc.
_ (Underscore Prefix) | _2 _4 _8 _16 | Prefix to indicate that the following duration is to be interpreted as a rest duration.
n! (Integer Literal with ! Suffix) | 2! | Integer literals followed by an exclamation mark are not interpreted as reciprocal durations but as literal durations, e.g. 2! specifies a duration of two whole notes.
. (Dot Suffix) | 8. 16 8.. 32 | Dots are used as suffixes to extend the preceding note or rest duration by a factor of 1.5. Multiple dots can be used in a row.
n/m (Fraction with Integer Numerator and Denominator) | 5/4 | Fractional note or rest duration, normally used if the duration cannot be expressed as a canonical duration using simple fractions of two and dots.
~ (Tilde Suffix) | 1~ 4 | Suffix used to indicate that the current note is rhythmically tied to the following note.
(n/m: <durations>) (Tuplet) | (3/2: 8 8 8) | Specifies a tuplet in which n notes are played in the original duration of m notes. The adjacent example produces an eighth triplet. To compute the resulting durations, the original durations have to be multiplied by the fraction m/n. For example, in the case of the triplet, the note durations are multiplied by 2/3, yielding durations of 1/8 * 2/3 = 1/12 for each of the eighth triplet notes.
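The tuplet duration rule from the table above can be checked with a small Python sketch (illustrative only, not part of MPS):

```python
from fractions import Fraction

def tuplet_durations(n, m, durations):
    """Scale the written durations of an (n/m:) tuplet by m/n,
    as described in the table above (sketch)."""
    return [d * Fraction(m, n) for d in durations]

# (3/2: 8 8 8) -- an eighth triplet: each written 1/8 sounds as 1/12
print(tuplet_durations(3, 2, [Fraction(1, 8)] * 3))
# [Fraction(1, 12), Fraction(1, 12), Fraction(1, 12)]
```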

Examples

Syntax | Description
_8 8 8 8 2 | Ludwig van Beethoven, Symphony No. 5 in C Minor, Op. 67, Motif Rhythm
4. 8 8 8 _4 | George Frideric Handel, Hallelujah Chorus from Messiah, HWV 56, Motif Rhythm
2 4 4 4. 16 16 4 _4 | Wolfgang Amadeus Mozart, Piano Sonata No. 16 in C major, KV 545, Opening Theme Rhythm
8 8 8 _8 8 8 _8 8 _8 8 8 _8 | Steve Reich, Clapping Music, Rhythmic Motif
4 _8 (3/2: 16 16 16) 4 _8 (3/2: 16 16 16) 4 | Wolfgang Amadeus Mozart, Symphony No. 41 in C major, K. 551, Opening Theme Rhythm

Anacruses

Some musical phrases do not start directly on a metrically strong beat, but are preceded by one or more notes referred to as an anacrusis, also known as a pickup or upbeat. This often happens at the very beginning of a piece, but phrases in the middle of a composition can also be initiated with pickup beats.

To indicate anacruses in MPS, the pickup beats are simply enclosed in parentheses. For example, the following code specifies a rhythm with an eighth note pickup beat:

rhythm (8) 8 8 8 16 16 4. _8

Consider the following model of Vivaldi’s Concerto No. 1 in E major, Op. 8, RV 269, known as Spring from the Four Seasons, in which anacruses both at the beginning and in the middle of a phrase are specified.

The equivalent syntactic representation of the model is:

composition
{
    tonalCenter E
    {
        rhythm (8) 8 8 8 16 16 4. _8
        {
            pitches 7 9 9 9 8 7 11
        }
        rhythm (16 16) 8 8 8 16 16 4. _8
        {
            pitches 11 10 9 9 9 8 7 11
        }
    }
}

The result of this model is the following score (with anacruses at the beginning and before the second full measure marked in red):

Time Signatures

Time signatures are defined using the time keyword in combination with a fraction, for example:

time 3/4

Depending on the metric context, the very same rhythm can have different musical meanings. This is illustrated in the following example:

The corresponding source code looks like this:

composition
{    
    instrument snare
    {
        rhythm 4. 8 4
        {
            time 3/4
            time 6/8
        }
    }    
}

When compiling the model, the following score results:

It demonstrates that the very same rhythm can have different musical meanings depending on the metric context.

Tempo

Tempo is an individual context dimension which can be changed independently of time signatures. The tempo is specified in beats per minute (BPM). Example:

tempo 100 

By default, the BPM specification defines the temporal distance of quarter notes. It is also possible to define other note durations to which the BPM specification relates. To specify the tempo for eighth notes, for example, the following syntax is used:

tempo 80 noteDuration 8

The note duration syntax is the same as described in section Rhythms.
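Assuming the tempo is interpreted as described (the BPM value applied to the given reference duration), the resulting absolute note durations can be sketched as follows (hypothetical helper, not MPS code):

```python
def note_seconds(bpm, note_duration, duration):
    """Seconds taken by a note of reciprocal value `duration` when the tempo
    is `bpm` beats per minute relative to `note_duration` notes (sketch)."""
    return (60.0 / bpm) * (note_duration / duration)

# tempo 80 noteDuration 8: an eighth note lasts 0.75 s, a quarter note 1.5 s
print(note_seconds(80, 8, 8))  # 0.75
print(note_seconds(80, 8, 4))  # 1.5
```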

It is also possible to define gradual tempo changes, as demonstrated in the following example:

tempo 80 -> 110 noteDuration 8

Instruments

The instrument context defines by which instrument the musical material in the respective part of the model is played. Syntactically, this context is defined by the instrument keyword followed by an instrument identifier, such as:

instrument guitar 

Refer to the following section Available Instruments for a complete list of predefined instruments.

The following context model represents an excerpt of the famous Boléro by Maurice Ravel, in which a part of the melody is sequentially played by the flute and the clarinet:

Syntactically, this can be expressed as:

composition
{
    time 3/4
    {
        instrument flute
        {
            fragmentRef melody
        }
        instrument clarinet
        {
            fragmentRef melody
        }
    }    
}

fragment melody
{
    rhythm 4. 16 16 16 16 16 16, pitches 7 6 7 8 7 6 5
    rhythm 8 16 16 4. 16 16, pitches 7 7 5 7 6 7
    rhythm 16 16 16 16 9/16, pitches 5 4 2 3 4
    rhythm 16 16 16 16 16 16 16 4, pitches 3 2 1 2 3 4 5 4
}

The score looks like this. Note that the clarinet is notated in B flat.

For a version of this model in which the melody is played simultaneously, refer to section Parallelizations.

Available Instruments

The MPS library contains a number of predefined instruments, which are listed and described in the following sections.

Instruments with Variable Pitches

The following instruments are generally playable in different pitches depending on their compass.

Identifier | Name | Description
accordion | Accordion |
acousticBass | Acoustic Bass | Transposing instrument: sounds one octave lower than notated, using bass clef for notation by default
acousticGuitar | Acoustic Guitar | Transposing instrument: sounds one octave higher than notated
acousticSteelGuitar | Acoustic Steel Guitar | Transposing instrument: sounds one octave higher than notated
altoSax | Alto Saxophone |
altoSaxInEb | Alto Saxophone in Eb | Transposing instrument: sounds a major sixth lower than notated
banjo | Banjo |
bass | Bass Guitar | Transposing instrument: sounds one octave lower than notated, using bass clef for notation by default
bassClarinet | Bass Clarinet |
bassClarinetInBb | Bass Clarinet in Bb | Transposing instrument: sounds a major ninth lower than notated
bassoon | Bassoon |
bassPicked | Picked Bass Guitar | Transposing instrument: sounds one octave lower than notated, using bass clef for notation by default
baritoneSax | Baritone Saxophone |
baritoneSaxInEb | Baritone Saxophone in Eb | Transposing instrument: sounds a major thirteenth lower than notated
celesta | Celesta | Transposing instrument: sounds one octave higher than notated
cello | Cello |
clarinet | Clarinet |
clarinetInA | Clarinet in A | Transposing instrument: sounds a minor third lower than notated
clarinetInBb | Clarinet in Bb | Transposing instrument: sounds a major second lower than notated
clarinetInEb | Clarinet in Eb | Transposing instrument: sounds a minor third higher than notated
contrabassoon | Contrabassoon | Transposing instrument: sounds one octave lower than notated, using bass clef for notation by default
doubleBass | Double Bass | Transposing instrument: sounds one octave lower than notated, using bass clef for notation by default
drawbarOrgan | Drawbar Organ |
electricGuitar | Electric Guitar | Transposing instrument: sounds one octave higher than notated
electricGuitarDistorted | Distorted Electric Guitar | Transposing instrument: sounds one octave higher than notated
electricGuitarJazz | Electric Jazz Guitar | Transposing instrument: sounds one octave higher than notated
electricGuitarMuted | Muted Electric Guitar | Transposing instrument: sounds one octave higher than notated
electricGuitarOverdriven | Overdriven Electric Guitar | Transposing instrument: sounds one octave higher than notated
electricPiano | Electric Piano |
englishHorn | English Horn |
englishHornInF | English Horn in F | Transposing instrument: sounds a perfect fifth lower than notated
flute | Flute |
frenshHorn | French Horn |
glockenspiel | Glockenspiel | Transposing instrument: sounds two octaves higher than notated
harmonica | Harmonica |
harp | Orchestral Harp |
harpsichord | Harpsichord |
horn | Horn | Synonymously used for French Horn
hornInF | Horn in F | Transposing instrument: sounds a perfect fifth lower than notated
oboe | Oboe |
organ | Church Organ |
pad | Pad (New Age) |
pad2 | Pad (Warm) |
panFlute | Pan Flute |
percussiveOrgan | Percussive Organ |
piano | Piano | Used as default if no instrument is specified
piccolo | Piccolo | Transposing instrument: sounds one octave higher than notated
reedOrgan | Reed Organ |
recorder | Soprano Recorder | Transposing instrument: sounds one octave higher than notated
recorderAlto | Alto Recorder |
recorderBass | Bass Recorder | Transposing instrument: sounds one octave higher than notated, using bass clef for notation by default
recorderContrabass | Contrabass Recorder | Notated using bass clef by default
recorderGarklein | Garklein Recorder | Transposing instrument: sounds two octaves higher than notated
recorderGreatBass | Great Bass Recorder | Transposing instrument: sounds one octave higher than notated, using bass clef for notation by default
recorderSopranino | Sopranino Recorder | Transposing instrument: sounds one octave higher than notated
recorderSubGreatBass | Sub-Great Bass Recorder | Transposing instrument: sounds one octave lower than notated, using bass clef for notation by default
recorderSubContrabass | Sub-Contrabass Recorder | Transposing instrument: sounds one octave lower than notated, using bass clef for notation by default
recorderTenor | Tenor Recorder |
rockOrgan | Rock Organ |
sitar | Sitar |
sopranoSax | Soprano Saxophone |
tenorSax | Tenor Saxophone |
tenorSaxInBb | Tenor Saxophone in Bb | Transposing instrument: sounds a major ninth lower than notated
timpani | Timpani |
trombone | Trombone |
trumpet | Trumpet |
trumpetInD | Trumpet in D | Transposing instrument: sounds a major second higher than notated
trumpetInBb | Trumpet in Bb | Transposing instrument: sounds a major second lower than notated
trumpetMuted | Muted Trumpet |
tuba | Tuba |
vibraphone | Vibraphone |
viola | Viola |
violin | Violin |
vocals | Vocals |
xylophone | Xylophone |

Untuned Percussion Instruments

The following instruments can generally not be played in different pitches:

Identifier | Name | Description
agogoHigh | High Agogo |
agogoLow | Low Agogo |
bassDrum | Bass Drum |
bassDrum2 | Bass Drum 2 | Alternative Bass Drum
bongoHigh | High Bongo |
bongoLow | Low Bongo |
cabasa | Cabasa |
china | China Cymbal |
claves | Claves |
congaHigh | High Conga |
congaLow | Low Conga |
congaHighMuted | Muted High Conga |
cowbell | Cowbell |
crash | Crash Cymbal |
crash2 | Crash Cymbal 2 |
cuica | Cuica |
cuicaMuted | Muted Cuica |
guiroShort | Short Guiro |
guiroLong | Long Guiro |
handClaps | Hand Claps |
hiHatClosed | Closed Hi-Hat |
hiHatPedal | Pedal Hi-Hat | Hi-hat played via pedal
hiHatOpen | Open Hi-Hat |
maracas | Maracas |
ride | Ride Cymbal |
ride2 | Ride Cymbal 2 |
rideBell | Ride Cymbal Bell |
sideStick | Side Stick |
snare | Snare Drum |
snareElectric | Electric Snare Drum |
splash | Splash Cymbal |
tambourine | Tambourine |
timbaleHigh | High Timbale |
timbaleLow | Low Timbale |
tomHigh | High Tom |
tomHighMid | High-Mid Tom |
tomLowMid | Low-Mid Tom |
tomLow | Low Tom |
tomFloorHigh | High Floor Tom |
tomFloorLow | Low Floor Tom |
triangle | Triangle |
triangleMuted | Muted Triangle |
vibraslap | Vibraslap |
whistleShort | Short Whistle |
whistleLong | Long Whistle |
woodBlockHigh | High Wood Block |
woodBlockLow | Low Wood Block |

Instrument Definitions

If additional instruments are required, users are able to define custom instruments by providing instrument definitions. Consider the following definition of an acoustic bass guitar:

instrumentDef acousticBass
{
    pitchRange [23..67]
    maxSimultaneousNotes 4
    scoreLabel "Bass"
    lilyPondInstrumentName "acoustic bass"
    defaultClef bass
    defaultOctave 2
}

The instrumentDef keyword is followed by an instrument identifier, which is used to reference the instrument definition in instrument contexts. For example, the acoustic bass can be referenced using the following syntax:

instrument acousticBass

Enclosed in curly braces, optional instrument parameters follow. Refer to the following table for descriptions of the individual parameters.

Parameter | Description
type | Either percussion (for percussion instruments) or synth (for synthesizers used in electronic / electroacoustic music). Omit this parameter to create an instrument of the default type, which is playable in different pitches.
pitchRange | Specifies the compass of the instrument in terms of MIDI notes using the syntax [lowest note..highest note].
maxSimultaneousNotes | Specifies the maximum number of notes which can be played simultaneously.
scoreLabel | Name of the instrument which is displayed at the beginning of staves in scores.
lilyPondInstrumentName | Instrument name used for assigning a MIDI instrument when exporting LilyPond scores. See the LilyPond documentation.
defaultClef | Default clef to use in scores. Currently supported clef names are treble, alto, tenor and bass.
defaultOctave | Default MIDI octave to use if none is specified in composition models.

Pitches

MPS supports multiple types of pitch specifications. One possibility is to specify absolute pitches and octave numbers such as Ab5. Refer to the following table for a specification of octave numbers:

MIDI Note Numbers | Octave Number | Octave Name
0-11 | -1 | Double Contra
12-23 | 0 | Sub Contra
24-35 | 1 | Contra
36-47 | 2 | Great
48-59 | 3 | Small
60-71 | 4 | One-line
72-83 | 5 | Two-line
84-95 | 6 | Three-line
96-107 | 7 | Four-line
108-119 | 8 | Five-line
120-127 | 9 | Six-line

MIDI note numbers were deliberately not chosen as the pitch unit, since they do not allow enharmonic differentiation. For instance, the pitch names G# and Ab correspond to the same key on the piano (assuming the same octave number is specified), but have different musical meanings depending on the harmonic context (see section Harmonic Contexts for more details). For this reason, harmonically significant pitch names are used. Alternatively, pitches may be given in terms of degrees on a scale, which is elaborated in section Scales.
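The enharmonic ambiguity of MIDI note numbers can be demonstrated with a short Python sketch (the octave formula follows the table above; to_midi is a hypothetical helper, not part of MPS):

```python
# Pitch classes in semitones; enharmonic spellings share a value.
PITCH_CLASSES = {"C": 0, "C#": 1, "Db": 1, "D": 2, "D#": 3, "Eb": 3,
                 "E": 4, "F": 5, "F#": 6, "Gb": 6, "G": 7, "G#": 8,
                 "Ab": 8, "A": 9, "A#": 10, "Bb": 10, "B": 11}

def to_midi(name, octave):
    """Convert a pitch name and octave number (see the octave table above)
    to a MIDI note number; octave 4 spans MIDI 60-71, so C4 = 60."""
    return (octave + 1) * 12 + PITCH_CLASSES[name]

# G#4 and Ab4 collapse to the same MIDI number, losing their distinct meanings
print(to_midi("G#", 4), to_midi("Ab", 4))  # 68 68
```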

The first two measures of W.A. Mozart’s Piano Sonata No. 16 in C major, K. 545, also known as Sonata Facile, are used as an example for pitch specifications using pitch names and octave numbers. Consider the following model:

The corresponding syntactic representation is:

composition
{    
    rhythm 2 4 4 4. 16 16 4 _4
    {
        pitches (startOctave 5) C E G B_4 C D C
    }
}

The resulting score is:

Various syntax alternatives for pitch specifications are listed in the following table:

Syntax | Description
<note name> | Used for specifying pitches explicitly, e.g. D, C# or Eb.
<integer number> | Used for pitch specifications based on scale degrees. Refer to section Scales for more details.
# (suffix) | Raises the previously specified pitch or scale degree by one semitone.
b (suffix) | Lowers the previously specified pitch or scale degree by one semitone.
[<pitches>] | Square brackets are used to specify chords. For example, a D major chord can be written as [D F# A].
@ (prefix) | Indicates the usage of an expression to dynamically compute a pitch or scale degree. For example, the expression @getRootNote() evaluates to the root note of the current context harmony. Refer to section Expressions for more details.

Additional parameters may be used when specifying pitches, which are explained in the following table. If these parameters are used, they have to be syntactically enclosed in parentheses before pitches or scale degrees are specified, as demonstrated in the previous listing with the startOctave parameter.

Parameter | Description
startOctave | Specifies the octave to use if no octaves are defined explicitly.
findNearestOctave | If set to true, the system changes the octave automatically if doing so yields a smaller semitone distance to the previous note. Example: in the pitch sequence A C, the system would start in the default octave, yielding A4. With findNearestOctave enabled, the next pitch would be C5, because it is closer to A4 than C4.
relative to | Specifies which harmonic context is to be used to determine the context scale and its tonic. Possible values are key and harmony. Refer to sections Scales and Harmonic Contexts for more details.
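The findNearestOctave behavior can be sketched in Python (a simplified, hypothetical model operating directly on MIDI numbers, not the actual MPS implementation):

```python
def nearest_octave_midi(pitch_class, previous_midi):
    """Pick the octave in which `pitch_class` (0-11) lies closest to
    `previous_midi`, sketching the findNearestOctave behavior above."""
    candidates = [octave * 12 + pitch_class for octave in range(11)]
    valid = [m for m in candidates if 0 <= m <= 127]
    return min(valid, key=lambda m: abs(m - previous_midi))

# After A4 (MIDI 69), pitch class C (0) resolves to C5 (72), not C4 (60)
print(nearest_octave_midi(0, 69))  # 72
```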

Scales

An alternative to specifying absolute pitches is referring to pitches in terms of scale degrees. Consider the following example, which shows a context tree model of Bedřich Smetana’s Moldau Theme. The pitches in the model are defined in terms of zero-based scale degrees. The theme is referenced twice in the model: once from a minor scale context and once from a major scale context.

The compilation of the model results in the following score:

The model can be represented syntactically as follows:

composition
{
    time 6/8, key Em, instrument violin
    {
        scale minor
        {
            fragmentRef theme
        }
        scale major, key E
        {
            fragmentRef theme
        }
    }
}

fragment theme
{
    rhythm (8)
    {
        pitches 4
    }
    rhythm 4 8
    {
        pitches 7 8
        pitches 9 10
    }
    rhythm 4 8 4.
    {
        pitches 11
    }
    rhythm 4. 4.
    {
        pitches 12
    }
    rhythm 4. ~ 4 8
    {
        pitches 11
    }
    rhythm 4. 4 8 4 8 4 8 4. 4 8 4 _8 _4.
    {
        pitches 10 10 10 9 10 9 9 8 8 8 7
    }
}

Using scale degrees instead of absolute note names has several advantages:

  1. Scale degrees are syntactically easier and shorter to write.
  2. Thinking in terms of scale degrees is often more adequate regarding music theory and reflects the way most composers and musicians think about pitches.
  3. Scale degrees can be easily projected onto another scale. In other words, the same degrees can be used in another scale context, which allows interesting musical variations.

This is also the case for the Vltava model, in which the theme is presented in two scale contexts, namely a minor and a major version.

Note that the scale contexts used in the previous example are optional, because a default scale context is derived from the current key context automatically. In the left branch, the current key context is Em (E minor), which results in a matching minor scale context by default. In the right branch, the key context is E (E major) and therefore the default scale is major. Refer to section Harmonic Contexts for more details.

Scale Definitions

MPS provides a number of built-in scales, which are listed in the following table:

Name | Identifier | Degrees in Semitones
Major | major | 0 2 4 5 7 9 11
Ionian | ionian | 0 2 4 5 7 9 11
Minor | minor | 0 2 3 5 7 8 10
Aeolian | aeolian | 0 2 3 5 7 8 10
Blues | blues | 0 3 5 6 7 10
Chromatic | chromatic | 0 1 2 3 4 5 6 7 8 9 10 11
Diminished | diminished | 0 1 3 4 6 7 9 10
Dorian | dorian | 0 2 3 5 7 9 10
Harmonic Major | harmonicMajor | 0 2 4 5 7 8 11
Harmonic Minor | harmonicMinor | 0 2 3 5 7 8 11
Locrian | locrian | 0 1 3 5 6 8 10
Lydian | lydian | 0 2 4 6 7 9 11
Major Pentatonic | majorPentatonic | 0 2 4 7 9
Minor Pentatonic | minorPentatonic | 0 3 5 7 10
Melodic Major | melodicMajor | 0 2 4 5 7 8 10
Melodic Minor | melodicMinor | 0 2 3 5 7 9 11
Mixolydian | mixolydian | 0 2 4 5 7 9 10
Phrygian | phrygian | 0 1 3 5 7 8 10
Whole-tone | whole | 0 2 4 6 8 10

If additional scales are required, users are able to define custom scales using scale definitions in the header section of composition files. Here is an example definition for the dorian scale:

scaleDef dorian
{
    degrees 0 2 3 5 7 9 10
}
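A scaleDef degree list can be resolved to concrete pitches as in the following Python sketch (a hypothetical helper; MPS performs this resolution internally):

```python
def degrees_to_midi(degrees, scale, tonic_midi):
    """Resolve zero-based scale degrees against a scaleDef-style semitone
    list (sketch). Degrees beyond the scale length wrap into the next octave."""
    n = len(scale)
    return [tonic_midi + 12 * (d // n) + scale[d % n] for d in degrees]

DORIAN = [0, 2, 3, 5, 7, 9, 10]  # degrees from the scaleDef above
# Degrees 0 1 2 7 over D4 (MIDI 62): D4 E4 F4 D5
print(degrees_to_midi([0, 1, 2, 7], DORIAN, 62))  # [62, 64, 65, 74]
```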

Loudness

To account for the loudness dimension of music, MPS supports both static loudness contexts and gradual loudness contexts. The latter are used to model crescendo and decrescendo.

Static loudness specifications are syntactically described with the loudness keyword followed by a single loudness instruction such as

loudness ff

Refer to the following table for an enumeration of possible loudness specifications and their mappings to common loudness units.

Name | Literal | MIDI Velocity | Amplitude | Approximated Sound Pressure Level in dB(SPL)
pppppp | pppppp | 4 | 0.03 | 3.78
ppppp | ppppp | 8 | 0.06 | 7.56
pppp | pppp | 16 | 0.13 | 15.12
pianopianissimo | ppp | 28 | 0.22 | 26.46
pianissimo | pp | 40 | 0.31 | 37.80
piano | p | 52 | 0.41 | 49.13
mezzopiano | mp | 64 | 0.50 | 60.47
mezzoforte | mf | 76 | 0.60 | 71.81
forte | f | 88 | 0.69 | 83.15
fortissimo | ff | 100 | 0.79 | 94.49
fortefortissimo | fff | 112 | 0.88 | 105.83
ffff | ffff | 120 | 0.94 | 113.39
fffff | fffff | 124 | 0.98 | 117.17
ffffff | ffffff | 127 | 1.00 | 120.00
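The table values are consistent with a simple linear mapping from MIDI velocity to amplitude and sound pressure level. The following Python sketch reproduces them (an observation about the table, not a documented MPS formula):

```python
def loudness_mapping(velocity):
    """Reproduce the table's velocity/amplitude/SPL relation, which matches
    amplitude = velocity / 127 and SPL = amplitude * 120 (sketch)."""
    amplitude = velocity / 127
    return round(amplitude, 2), round(amplitude * 120, 2)

print(loudness_mapping(88))   # forte: (0.69, 83.15)
print(loudness_mapping(112))  # fff:   (0.88, 105.83)
```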

Gradual loudness specifications (i.e. crescendo and decrescendo) contain two loudness instructions delimited by the token -> such as

loudness p -> f

For gradual loudness instructions, the special loudness instruction current may be used, which refers to the last loudness level specified in the composition. This is also demonstrated in the following context tree model of W.A. Mozart’s Concerto for Flute, Harp, and Orchestra in C major, K. 299/297c.

The language representation of this model looks like this:

composition
{
    instrument oboe
    {
        loudness f
        {
            repeat 2
            {
                rhythm 4. 8 8 8 8 8
                {
                    pitches 14 11 9 7 9 11
                }
            }
            rhythm 4
            {
                pitches 14
            }
        }
        rhythm 4
        {
            loudness p
            {
                pitches 7 6 7
            }
            loudness f
            {
                pitches 8
            }
            loudness p
            {
                pitches 8 7# 8
            }
            loudness f
            {
                pitches 9
            }
            loudness p
            {
                pitches 9 8# 9
            }
            loudness current -> f
            {
                pitches 10 11 12 13
            }
        }
        loudness f
        {
            rhythm 7/4 _4
            {
                pitches 14
            }
        }
    }
}

This is the resulting score. Note specifically the crescendo resulting from a gradual loudness context in the sixth measure:

Harmonic Contexts

Harmonic contexts are especially important in western tonal music, in which pitches in compositions are usually organized in reference to specific keys. Matching scales and functions of specific chords can be derived depending on the key context. MPS supports explicit specifications of harmonic contexts including hierarchically arranged keys and contextual harmonies.

Keys

Keys serve as musical "landmarks" in tonal compositions. While simple pieces might only define one key, more complex compositions might incorporate temporary key changes (modulations) or even key changes for whole sections or parts of the piece, for instance compositions following the sonata form. Modulations and key changes can be modeled elegantly in MPS using hierarchical arrangements (as discussed in section Hierarchical Structures and Polymorphism). In this way, the scope of the specified keys can be controlled using an arbitrary number of logical levels.

An example is provided in the following figure, which contains a schematic hierarchical arrangement of keys used in the first movement of Mozart’s Symphony No. 40 in G minor, K. 550. The global key of this movement is G minor. Themes are presented in the exposition in G minor and its relative major key Bb major. In the development, Mozart modulates through a number of keys starting with F# minor. The recapitulation concludes in the global key G minor.

Syntactically, keys are defined by referring to the root note name (for instance G or D#) and the optional suffix m indicating a minor harmony (e.g. Am or Bbm).

Harmonies

While keys provide a global harmonic context in tonal compositions, harmonic progressions provide local harmonic transitions. These can be expressed implicitly by specifying simultaneously sounding notes or in an explicit way, for example in the style of lead sheets (as shown in this example ).

The following context tree model defines a harmonic progression consisting of four local harmonies. These are hierarchically embedded in the global key context A minor.

The resulting score is shown below:

Syntactically, this can be written as:

composition
{
    key Am
    {
        rhythm 1
        {
            pitches(relative to harmony) [0 2 4]
            {
                harmony Am
                harmony G
                harmony F
                harmony E
            }
        }
    }
}

The complexity of harmonies is not limited to major and minor chords. MPS supports additional notes and harmony specifications, as listed in the following table:

Syntax | Description
<integer number> | Additional harmony note relative to the root note, expressed in terms of scale degrees. For example, F7 translates to an F major chord with an added minor seventh.
# or b (prefix) | Optional prefix for additional harmony notes, indicating a semitone correction upwards or downwards, respectively.
maj7 | Adds a major seventh relative to the root note.
m7 | Indicates a minor chord with a minor seventh.
sus2 | Suspended second chord in which a major second is added and the third is omitted.
sus4 | Suspended fourth chord containing a perfect fourth but no third.
° | Diminished chord
+ | Augmented chord
power | Power chord containing only the root and the fifth. Frequently used in rock and metal genres.

Note that these additions can be combined, for instance A7sus4 defines a harmony with the notes A, D, E and G. Refer to section Harmonic Modifiers for more examples demonstrating harmony additions.
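As an illustration, the A7sus4 example can be resolved to pitch classes with a short sketch. The interval values are standard music theory; the helper name and the set-based resolution strategy are illustrative assumptions, not the MPS implementation:

```python
# Sketch: resolving the combined specification A7sus4 to pitch classes
# (C = 0). The interval values are standard; the helper is hypothetical.
PITCH_CLASS = {"A": 9}

def a7sus4(root):
    r = PITCH_CLASS[root]
    intervals = [0, 5, 7]   # sus4: root, perfect fourth, perfect fifth (no third)
    intervals.append(10)    # "7": added minor seventh
    return sorted((r + i) % 12 for i in intervals)

# A7sus4 -> A, D, E and G (pitch classes 9, 2, 4, 7)
assert a7sus4("A") == [2, 4, 7, 9]
```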

Harmonic Progressions

In certain cases it is convenient to specify a harmonic sequence as a whole. In MPS, this is possible using the harmonicProgression keyword in combination with a harmonicRhythm instruction defining the duration of each harmony in the progression. This is demonstrated in the following context tree model:

It results in an equivalent score as the previous example in section Harmonies.

The following code contains the corresponding language representation:

composition
{
    key Am
    {
        harmonicProgression Am G F E, harmonicRhythm 1 1 1 1
        {
            rhythm 1 1 1 1
            {
                pitches(relative to harmony) [0 2 4]
            }
        }
    }
}

Lyrics

In vocal music, sung notes are normally associated with syllables, which are considered a separate context dimension in the MPS model. Syllables are provided using a simple word-based syntax. To distribute the syllables of a word onto multiple notes, hyphens (-) may be used. Syllable assignments for specific notes can be skipped using underscores (_). As an example, the first measures of the song Hey Jude by the Beatles are used. The context tree model looks like this:

It can be represented with the following syntax:

composition
{
    tonalCenter F
    {
        rhythm (4) 2 _8 8 8 8 2 _2
        {
            pitches 4 2 2 4 5 1
            {
                lyrics "Hey Jude don't make it bad"
            }
        }  
        rhythm (8 8) 4 4. 8 8 8 8 16 16 2 _4
        {
            pitches 1 2 3 7 7 6 4 5 4 3 2
            {
                lyrics "take a sad song and make it bet-te--r"
            }
        }
    }
}

It results in the following score:

Custom Contexts

MPS offers the feature to create arbitrary custom contexts. An example is shown in the following model:

The context tree model contains three sections whose individual moods are described by means of custom context nodes. Custom contexts are syntactically defined by the keyword customContext, followed by a context identifier (in this case mood) and a string literal containing the value for the context. Refer to the following listing for the corresponding language representation:

composition
{
    fragment section1
    {
        customContext mood "vivid"
    }
    
    fragment section2
    {
        customContext mood "melancholic"
    }
    
    fragment section3
    {
        customContext mood "maestoso"
    }
    
}

Custom contexts are visually represented as separate layers in models. Scores generated from models containing custom contexts will contain textual annotations such as "Mood: vivid" at the top of the relevant staves.

Context Modifiers

Frequently, musical material that has already been introduced is slightly changed and shaped in the course of a composition. In these cases, no fundamentally new ideas are introduced, but existing ones are modified. To account for this, so-called context modifiers allow existing musical material to be adjusted. Their functionality is explained in the following subsections.

By default, modifiers are applied to the next matching context above the modifier node. If the modifier should also be applied to nodes beneath it, add the keyword recursive after the modifier specification.

Rhythmic Modifiers

Rhythmic context modifiers have the purpose of manipulating existing rhythmic contexts in a musical composition.

Augmentations and Diminutions

Rhythmic augmentation involves prolonging the note lengths of a given rhythm by multiplying the original lengths with a constant factor, typically 2, although other scale factors are possible. A rhythmic diminution is the opposite of a rhythmic augmentation, i.e. the note lengths are not extended but shortened by a constant factor.

The following example demonstrates a model of a subject being transformed using diminution and inversion. It can be found in J.S. Bach’s Die Kunst der Fuge, BWV 1080, Contrapunctus VII.

The language representation looks as follows:

composition
{
    key Dm
    {
        parallel
        {
            fragmentRef soprano
            fragmentRef tenor
        }
    }
}

fragment soprano
{
    rhythm _1
    inversion 11
    {
        fragmentRef subject    
    }
}

fragment tenor
{
    diminution, scale melodicMinor
    {
        fragmentRef subject
    }
}

fragment subject
{
    rhythm 2 4. 8 4. 8 2 2 4. 8 5/8 8 8 8 4 _4 _2
    {
        pitches 0 4 3 2 1 0 -1 0 1 2 3 2 1 0
    }
}

The following results from this model:

Rhythmic Extensions

Rhythmic extension modifiers are used to extend the duration of the last note or rest in a rhythm. This modifier was already demonstrated in the context tree model for Beethoven’s Symphony No. 5 in C Minor, Op. 67 in section Introductory Example.

Syntactically, rhythmic extensions are specified using the keyword rhythmicExtension, followed by a note duration as explained in section Rhythms. If the note duration is positive, the rhythm is extended. If the note duration is negative, the rhythm is shortened by the absolute value of the given negative duration.

Rhythmic Adjustments

Rhythmic adjustment modifiers make it possible to modify the rhythm in the current context at its beginning and end. The modifications are specified by means of two durations for the beginning and the end of the rhythm, respectively. It is possible to specify both or only one of the parameters. Refer to the following table for detailed parameter descriptions.

Parameter | Description
startDelta | Specifies how the rhythm is modified at the beginning. If startDelta is positive, the rhythm starts from the given time, effectively shortening it by startDelta. If startDelta is negative, the first note or rest of the rhythm is extended.
endDelta | Specifies a duration for the adjustment of the end of the rhythm. If endDelta is positive, the rhythm is extended; if endDelta is negative, the rhythm is shortened. The behaviour is identical to that of the rhythmicExtension modifier introduced in section Rhythmic Extensions.
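The two parameters can be sketched on plain duration lists (durations as fractions of a whole note; the behaviour for deltas exceeding the first or last duration is left out as a simplifying assumption):

```python
from fractions import Fraction as F

# Sketch: applying startDelta/endDelta to a list of note durations.
# Clipping for deltas larger than one note is omitted (an assumption).
def adjust(durations, start_delta=F(0), end_delta=F(0)):
    durations = list(durations)
    durations[0] -= start_delta   # positive startDelta shortens the start
    durations[-1] += end_delta    # positive endDelta extends the end
    return durations

# Shorten a half note by an eighth at the start, extend the end by a quarter
assert adjust([F(1, 2), F(1, 4)], start_delta=F(1, 8), end_delta=F(1, 4)) == [F(3, 8), F(1, 2)]
```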

Rhythmic Insertions

This modifier inserts a rhythm into the contextually present rhythm. This can either happen in an additive manner, in which existing notes and rests are shifted to the right, or in a destructive manner, in which existing elements are overwritten.

A rhythmic insertion was already demonstrated in Queen’s Bohemian Rhapsody in section Inheritance. Refer to the score in this section and compare the rhythms in the first and the second measure, which both start with three eighth notes but continue differently. In the model, this is expressed using a rhythmic insertion in the right subtree, which represents the specifics of the second measure. The rhythm 8 16 5/16 is inserted into the basic rhythm _8 8 8 8 4 4 at offset 2, i.e. after the duration of a half note, effectively replacing the two quarter notes with the specified rhythm. The following table contains explanations of all parameters of this modifier.

Parameter | Description
offset | Specifies after which duration the insertion is applied to the rhythm.
rhythm | Defines the rhythm to be inserted, in the syntax introduced in section Rhythms.
mode | Either insert, which shifts existing notes and rests after the insertion point to the right, or overwrite, which overwrites existing elements.
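A sketch of both modes on duration lists, using the overwrite case from the Bohemian Rhapsody example above (offsets that split an existing note are not handled, which is a simplification):

```python
from fractions import Fraction as F

# Sketch of the two insertion modes on duration lists (fractions of a
# whole note). Offsets that fall inside a note are not handled here.
def insert_rhythm(base, inserted, offset, mode="insert"):
    elapsed, i = F(0), 0
    while elapsed < offset:        # find the element at the offset
        elapsed += base[i]
        i += 1
    if mode == "insert":           # shift the remainder to the right
        return base[:i] + inserted + base[i:]
    span = sum(inserted, F(0))     # overwrite: drop the covered elements
    rest, j = F(0), i
    while rest < span and j < len(base):
        rest += base[j]
        j += 1
    return base[:i] + inserted + base[j:]

base = [F(1, 8)] * 4 + [F(1, 4), F(1, 4)]   # _8 8 8 8 4 4, as durations
new = insert_rhythm(base, [F(1, 8), F(1, 16), F(5, 16)], F(1, 2), mode="overwrite")
# 8 16 5/16 replaces the two quarter notes after a half note has elapsed
assert new == [F(1, 8)] * 4 + [F(1, 8), F(1, 16), F(5, 16)]
```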

Rhythmic Displacements

Rhythmic displacement modifiers translate existing rhythms by shifting them to the right or to the left within their own boundaries. The modifier takes a note duration offset and a mode specification as parameters, which are explained in detail in the following table.

Parameter | Description
offset | Defines the rhythm translation offset. For positive durations, the rhythm is shifted to the right; for negative durations, to the left.
mode | In discard mode, notes moved over the rhythm’s boundary are removed. In wrap mode, they are appended to the other end of the rhythm.
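Both modes can be sketched on a simple onset pattern (True for a note, False for a rest, one slot per eighth note; this boolean representation is an illustrative simplification of the actual rhythm model):

```python
# Sketch: displacing an onset pattern by whole eighth-note slots.
# wrap re-enters shifted entries at the other end; discard drops them.
def displace(pattern, offset, mode="wrap"):
    n = len(pattern)
    if mode == "wrap":
        return [pattern[(i - offset) % n] for i in range(n)]
    shifted = [False] * n               # discard: shift and pad with rests
    for i, onset in enumerate(pattern):
        if 0 <= i + offset < n:
            shifted[i + offset] = onset
    return shifted

clapping = [True, True, True, False, True, True,
            False, True, False, True, True, False]  # 8 8 8 _8 8 8 _8 8 _8 8 8 _8
# Shifting one eighth to the left wraps the first onset to the end
assert displace(clapping, -1) == clapping[1:] + clapping[:1]
```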

As an example, consider Steve Reich’s composition Clapping Music, in which a rhythmic motif is repeatedly performed by two players. For the second player, the rhythm is iteratively shifted and wrapped, resulting in twelve rhythmic variations. The following context tree model contains a repeatedly applied rhythmic displacement modifier:

The following score results:

The syntactical representation of the model follows:

composition
{
    time 12/8, tempo 168
    {
        instrument handClaps
        {
            parallel
            {
                repeat 13
                {
                    fragmentRef motiv
                }
                for n in 0 to -12 step -1
                {
                    fragmentRef motiv
                    {
                        rhythmicDisplacement mode wrap offset n/8
                    }
                }
            }
        }
    }
}

fragment motiv
{
    repeat 4
    {
        rhythm 8 8 8 _8 8 8 _8 8 _8 8 8 _8
    }
}

Pitch Modifiers

Pitch modifiers are used for manipulating contexts in the musical pitch dimension.

Transpositions

Transpositions have the effect of modifying contextually available pitches. The modifier can be applied in three modes in order to support semitone-based transpositions, scale-based transpositions and octave translations. All parameters are explained in the following table:

Parameter | Description
mode | Defines the unit of the interval expression. Three modes are available: absolute for semitone-based transpositions, inScale for transpositions of scale degrees and octaves for octave translations. If the parameter is not specified, the default absolute is used.
interval | Expression which must be interpretable as an integer number. The unit of this number is defined by the mode parameter.
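The difference between absolute and inScale transposition can be sketched on a single scale degree. The degree-to-semitone resolution shown here is an illustrative assumption, not the MPS implementation:

```python
# Sketch: absolute (semitone) vs. inScale (degree) transposition.
MINOR = [0, 2, 3, 5, 7, 8, 10]

def transpose_absolute(semitone_offset, interval):
    return semitone_offset + interval

def transpose_in_scale(degree, interval, scale):
    octave, index = divmod(degree + interval, len(scale))
    return 12 * octave + scale[index]  # result as a semitone offset

# From the root of a minor scale, two scale steps up is a minor third
# (3 semitones), while an absolute transposition by 2 is only 2 semitones.
assert transpose_in_scale(0, 2, MINOR) == 3
assert transpose_absolute(0, 2) == 2
```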

Refer to section Sequences for an example demonstrating various transposition techniques.

Inversions

Inversions were already demonstrated in section Augmentations and Diminutions in conjunction with a diminution using J.S. Bach’s Die Kunst der Fuge, BWV 1080, Contrapunctus VII as an example.

Parallel Intervals

Parallel interval modifiers add simultaneously audible pitches in a specific interval to existing pitches. The intervals can be specified in terms of semitones, scale degrees or octaves. As an example, a context tree model of the guitar intro of Deep Purple’s Smoke on the Water is demonstrated:

The language representation of this model is:

composition
{
    time 4/4, tempo 110
    {
        instrument electricGuitarOverdriven
        {
            key Gm
            {
                scale blues
                {
                    parallelInterval mode absolute -5 recursive
                    {
                        fragmentRef fragment1
                        
                        rhythm _8 8 _8 8 _8 8 4 _4
                        {
                            pitches 0 1 3 2
                        }
                        
                        fragmentRef fragment1
                        
                        rhythm _8 8 _8 4. _2
                        {
                            pitches 1 0
                        }
                    }
                }
            }
        }
    }
}

fragment fragment1
{
    rhythm 8 _8 8 _8 4
    {
        pitches 0 1 2
    }
}

The model results in the following score:

The main melodic motif is notated in terms of degrees on the minor blues scale, which consists of the minor pentatonic scale with an added "blue note" between the 3rd and 4th scale degree:

The upper notes of the famous Smoke on the Water riff can be specified in terms of scale degrees on the G minor blues scale. Analyzing the distances between the parts reveals that the lower notes maintain a constant distance to the upper notes, namely five semitones or a perfect fourth. It is therefore more convenient to specify this interval once than to notate each lower pitch manually. Refer to the following table for a detailed description of the parallel interval modifier parameters.

Parameter | Description
mode | Specifies the interval unit. Available modes are absolute (in semitones), inScale (for scale-specific parallel intervals) and octaves.
interval | Expression defining the parallel interval. The expression must be interpretable as an integer number. See section Expressions for more details.

Note that the first and third measure are exactly identical, which is why the individual musical contexts of these measures were extracted to a fragment and referenced twice, as already described in section Fragments.

Harmonic Modifiers

Harmonic modifiers are used to extend or alter contextually accessible harmonies. In the following context tree model, various harmony modifications of the base harmony A major are demonstrated:

The resulting chords of the modifications are: A major, A7, Amaj7, A augmented and A diminished. Compare the model with the resulting score:

Refer to section Chord Generators for details on the chordGenerator.

Context Generators

The purpose of context generators is to create new contexts based on already existing contexts. For example, pitch contexts can be built based on harmonic contexts, as explained in the following sections.

Chord Generators

Chord generators create pitch contexts representing specific chord inversions for contextually available harmonies. Refer to the following model for an example, in which an abstract chord progression is defined using Roman numerals.

Concrete chord inversions are derived using a chord generator, resulting in the following score:

In the language, this model can be expressed as follows:

composition {
    key E
    {
        harmonicProgression I IV ii V7 I
        {
            harmonicRhythm 4 4 4 4 1
            {
                rhythm 4 4 4 4 1
                {
                    chordGenerator
                }
            }
        }
    }
}

Chord generators can be flexibly configured for various musical applications. All possible parameters are described in the following table:

Parameter | Description
startOctave | Defines the octave in which the lowest note of the first chord is generated.
startInversion | Specifies the default inversion of this chord generator. 0 corresponds to the root position, 1 to the first inversion, etc.
numberOfNotes | Defines how many notes are generated for each chord. If this parameter is not specified, the minimum number of notes required to express the harmony adequately is used, e.g. three notes for major or minor chords but four notes for a dominant seventh chord.
includeBassNote | If set to true, the bass note (which in some cases differs from the root note) is included in generated chords.
findNearestInversion | If set to true, the system minimizes the distance between successive chords, i.e. inversions with a minimum aggregated semitone distance to the previous chord are used.
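The findNearestInversion behaviour can be sketched as a search over candidate inversions. The candidate enumeration across neighbouring octaves and the distance metric are illustrative assumptions:

```python
# Sketch: choose the inversion of the next chord (MIDI note numbers)
# with the smallest aggregated semitone distance to the previous chord.
def inversions(chord):
    for i in range(len(chord)):
        inv = sorted(chord[i:] + [p + 12 for p in chord[:i]])
        # also consider the same inversion an octave lower and higher
        yield from (sorted(p + shift for p in inv) for shift in (-12, 0, 12))

def nearest_inversion(previous, chord):
    return min(inversions(chord),
               key=lambda c: sum(abs(a - b) for a, b in zip(previous, c)))

# After C major (C4 E4 G4), the nearest A minor voicing keeps C4 and E4
# and only moves the top note up to A4.
assert nearest_inversion([60, 64, 67], [57, 60, 64]) == [60, 64, 69]
```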

Arpeggio Generators

Arpeggio generators are specialized chord generators which distribute the individual notes of generated chords sequentially in time. A simple example is demonstrated in the following context tree model:

The resulting score is:

The corresponding language representation is:

composition
{
  time 6/8, harmony Em
  {
    rhythm 8 8 8 8 8 8
    {
      arpeggioGenerator (numberOfNotes 4 noteIndexSequence 0 1 2 3 2 1)
    }
  }
}

Internally, arpeggio generators determine concrete chord inversions just like chord generators. Therefore, all parameters of chord generators (see section Chord Generators) can be applied to arpeggio generators. However, instead of generating simultaneously played notes, arpeggio generators produce sequentially played notes in a contextually available rhythm. For this purpose, the generator sequentially chooses notes from the current chord. By default, notes are chosen in ascending order and this sequence is wrapped if more notes are required. For example, for a D minor chord (D-F-A) and a rhythm with four notes, the resulting arpeggio sequence would be D-F-A-D.

The sequence of the selected notes can be influenced with the so-called note index sequence. Each note in the chord is assigned a zero-based index (for the above-mentioned D minor chord: D ⇒ 0, F ⇒ 1, A ⇒ 2). To produce descending instead of ascending arpeggios, the default note index sequence 0 1 2 could be changed to 2 1 0. In the previous example, the note index sequence 0 1 2 3 2 1 is used, which results in an alternating ascending and descending arpeggio.
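The note selection logic can be sketched as follows (hypothetical helper, not the MPS API; note names are used instead of actual pitches for readability):

```python
# Sketch: distributing chord notes over a rhythm using a note index
# sequence, wrapped when more notes are required than the sequence holds.
def arpeggiate(chord, note_count, index_sequence=None):
    seq = index_sequence or list(range(len(chord)))
    return [chord[seq[i % len(seq)]] for i in range(note_count)]

# Default ascending order with wrapping, as for the D minor example:
assert arpeggiate(["D", "F", "A"], 4) == ["D", "F", "A", "D"]

# The E minor example above: numberOfNotes 4, noteIndexSequence 0 1 2 3 2 1
E_MINOR = ["E", "G", "B", "E'"]
assert arpeggiate(E_MINOR, 6, [0, 1, 2, 3, 2, 1]) == ["E", "G", "B", "E'", "B", "G"]
```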

A more complex example is demonstrated in the following model:

The model produces the first four measures of J.S. Bach’s well-known Prelude in C Major, BWV 846. Two separate arpeggio generators are used to generate independent arpeggios for the left and the right hand. An advanced feature is used in the third chord (appearing in the third measure): the harmony is specified as G7 with B in the bass, and additionally a so-called note exclusion with the syntax -B is specified. It instructs the compiler to skip this note during the chord inversion computation, allowing specific notes to be excluded from the chord generation process. As can be seen in measure 3 of the following score, the note B is consequently not present in the arpeggio.

Control Structures

Control structures can be utilized to dynamically reuse contexts in context tree models with the help of loops, iterative modifications and other algorithmic constructs, which are explained in detail in this section.

Parallelizations

Parallelizations are used to indicate that the tree branches below are not to be evaluated sequentially, but in parallel. This results in independent musical streams, i.e. multiple parts or voices being played simultaneously.

As an example, a parallel version of an already introduced context tree model is shown. The following model uses a parallelization node so that the melody is played simultaneously by flute and clarinet:

This is syntactically accomplished with the parallel keyword:

composition
{
    time 3/4
    {
        parallel
        {
            instrument flute
            {
                fragmentRef melody
            }
            instrument clarinet
            {
                fragmentRef melody
            }
        }
    }
}

fragment melody
{
    rhythm 4. 16 16 16 16 16 16, pitches 7 6 7 8 7 6 5
    rhythm 8 16 16 4. 16 16, pitches 7 7 5 7 6 7
    rhythm 16 16 16 16 9/16, pitches 5 4 2 3 4
    rhythm 16 16 16 16 16 16 16 4, pitches 3 2 1 2 3 4 5 4
}

The resulting score is:

Compare with the model already presented in section Instruments, which results in sequentially played melodies.

Repetitions

Repetition is a frequently utilized technique in music composition and is applied in a variety of forms. A common form of repetitions is known from musical scores, in which repeat signs indicate that a section of the score is to be played again (as an example, refer to section Rhythmic Displacements ).

In MPS, arbitrary subtrees of contexts can be repeated, which can be applied to single contexts or combinations of musical contexts. Furthermore, repetitions can be nested hierarchically. This is demonstrated using a context tree model of a simple drum groove:

The corresponding language representation is:

composition
{
    time 4/4, tempo 100
    {
        repeat 2
        {
            parallel
            {
                instrument hiHatClosed
                {
                    repeat 8
                    {
                        rhythm 8
                    }
                }
                instrument snare
                {
                    repeat 2
                    {
                        rhythm _4 4
                    }
                }
                instrument bassDrum
                {
                    rhythm 4 _4 8 8 _4
                }
            }
        }
    }
}

The model produces the following score:

The model contains nested control structures to repeat context subtrees. The outer structure (repeat 2) repeats the whole measure produced by the subtree below the parallel element, which produces musical material for closed hi-hats, snare and bass drum. A nested repetition resulting in eight eighth notes is specified for the hi-hats. Likewise, the snare drum repeats the rhythmic pattern of a quarter rest followed by a quarter note (rhythm _4 4) twice, which is also expressed as a nested repetition. In this manner, repetitions of musical context subtrees can be hierarchically nested with arbitrary complexity.

The repeat count can be bound to a variable, which can be utilized to introduce conditional contexts. This technique is demonstrated in the following section.

Conditions

Condition nodes can be used to define conditional contexts. For this purpose, an expression is defined which evaluates to a boolean value, yielding either true or false. Depending on the result, a different context tree branch is used. This is illustrated in the following context tree model, which produces the drum intro of Coldplay’s In My Place.

The resulting drum part is:

Syntactically, this model can be expressed as:

composition
{
    time 4/4, tempo 72
    {
        repeat 2 as outerCount
        {
            parallel
            {
                fragment cymbals
                {
                    repeat 8 as innerCount
                    {
                        rhythm 8 
                        {
                            if outerCount == 1 and innerCount == 1
                            {
                                instrument crash
                            }
                            else
                            {
                                instrument hiHatOpen
                            }
                        }
                    }
                }

                instrument bassDrum
                {
                    rhythm 4 _8. 16 _16 16 8 _4
                }
                
                instrument snare
                {
                    rhythm _4 8. _16 _4 4    
                }
                
            }
        }
    }
}

The contexts for the cymbals are specified conditionally in this context model. A condition based on the current repetition counts of an outer and an inner repeat control structure is specified. It evaluates to true if both the outer and the inner repetition count are 1. If this is the case, a crash cymbal is used as instrument context; in all other cases, the open hi-hat is played. In the two measures shown in the drum part, it can be seen that the condition evaluates to true only on the first beat of the first measure, on which a crash cymbal is played. On all other beats, especially on the first beat of the second measure, an open hi-hat is played because the outer repetition count evaluates to 2 in the second measure.

Condition expressions can be based on arbitrary variables defined in any context nodes which are hierarchically placed above the current condition node. Notably, results of function calls can be used to create dynamically modeled compositions using conditional contexts. Refer to section Function Calls for more details.

Iterations

Iterations are used to create loops in which musical material is iteratively modified. The control structure resembles for loops in general purpose programming languages. Iterations define a control variable which typically changes its value in every loop iteration. The following model and the corresponding code demonstrate an iteration producing a G minor blues scale, which was already introduced in section Parallel Intervals.

The language representation of this model is:

composition
{
    key Gm
    {
        scale blues
        {
            rhythm 4
            {
                for degree in 0 to 6
                {
                    pitches @degree
                }
            }
        }
    }
}

Also refer to section Rhythmic Displacements, in which a rhythmic pattern is iteratively displaced using a corresponding control structure and a suitable rhythmic modifier.

Sequences

MPS provides a separate control structure for melodic sequences. Technically, melodic sequences are translated to an iteration with nested transpositions. The following context tree model represents a sequence from J.S. Bach’s Invention No. 4 in D minor, BWV 775:

The model can be syntactically represented as follows:

composition
{
    time 3/8, key Dm
    {
        parallel
        {
            rhythm 16
            {
                pitches 9 7 8 9 10 11 5 11 10 9 8 7
                {
                    sequence 2 times step -1 mode inScale
                }
            }
            rhythm 8
            {
                pitches 0 7 2 3 4 5, transpose mode octaves -1
                {
                    sequence 2 times step -1 mode inScale
                }
            }
        }
    }
}

The model produces the following score:

Sequence control structures are applied both to the right hand and the left hand voice. Both sequence control structures are applied twice (2 times). In the first iteration, the specified pitches are adopted without modification. In the second iteration, the pitches are transposed one step down. Consequently, the scale degrees of both voices are diatonically transposed down in parallel. Refer to the following table for detailed parameter descriptions.

Parameter | Description
times | Specifies how often the sequence is repeated.
step | Defines the offset of the iteratively applied transposition. The unit of this expression is defined by the mode parameter.
mode | Defines the unit of the interval expression. Three modes are available: absolute for semitone-based transpositions, inScale for transpositions of scale degrees and octaves for octave translations. If the parameter is not specified, the default absolute is used.
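The translation of a sequence into an iteration with cumulative transpositions can be sketched as follows (degree arithmetic only; resolving the degrees to concrete pitches via the scale is omitted):

```python
# Sketch: a melodic sequence as an iteration with nested transpositions.
# The step is applied cumulatively per iteration, as described above.
def sequence(degrees, times, step):
    result = []
    for i in range(times):
        result.extend(d + i * step for d in degrees)
    return result

# Two iterations with step -1: the second pass is one scale degree lower,
# as in the left-hand voice of the BWV 775 example.
assert sequence([0, 7, 2, 3, 4, 5], times=2, step=-1) == [0, 7, 2, 3, 4, 5, -1, 6, 1, 2, 3, 4]
```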

While-Loops

The contents of while-loops are applied as long as a specified condition is fulfilled. An example is demonstrated in the following model:

A possible result is shown below:

The syntax representation of this model is:

composition
{
    while getMeasureNumber() <= 2
    {
        rhythm 8
        {
            pitches @getRandomInteger(0, 7)
        }
    }
}

The loop is applied while the measure number is less than or equal to 2 (i.e. in the first two measures). The current measure number can be retrieved using the function getMeasureNumber(). Pitches are chosen randomly using another function call to getRandomInteger(). Refer to section Function Calls for more details.

Switches

This control structure selects and processes only one of the specified child tree branches for each invocation. If the structure is encountered again (e.g. due to a repeat), the next child branch is processed. If no more child branches are available, processing continues from the first child branch again.

An example is provided in the following context tree model, in which the same melody is repeated three times. The switch control structure applies three different lyrics contexts for each loop iteration. Consequently, each time the switch is encountered, other lyrics are produced in the right subtree.

The corresponding language representation is:

composition
{
    instrument vocals
    {
        repeat 3
        {
            rhythm (4) 4 _2.
            {
                pitches 4 2
                {
                    lyrics "Hey Jude"
                }
            }
            rhythm (8 8 8) 4 _2.
            {
                pitches 2 4 5 1
                {
                    switch
                    {
                        lyrics "don't make it bad"
                        lyrics "don't be af-raid"
                        lyrics "don't let me down"
                    }
                }
            }
        }
    }
}

The model produces the following score:

It is also possible to define other, non-consecutive processing orders. This is done by specifying a so-called child index sequence, as shown below:

switch childIndexSequence 0 0 1

The previously specified switch will process the first child branch twice, followed by the second child branch. If invoked again, processing will start over at the beginning of the custom sequence.
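The child selection behavior can be sketched in Python. This is an illustrative model, not MPS code: make_switch is a hypothetical helper that cycles through all children by default and follows a custom child index sequence if one is given.

```python
# Sketch (not MPS code) of switch child selection. By default the
# children are processed in order and the selection wraps around; a
# child index sequence replaces that order.

def make_switch(children, index_sequence=None):
    order = index_sequence or list(range(len(children)))
    state = {"i": 0}
    def invoke():
        child = children[order[state["i"] % len(order)]]
        state["i"] += 1
        return child
    return invoke

lyrics = ["don't make it bad", "don't be af-raid", "don't let me down"]

default = make_switch(lyrics)
print([default() for _ in range(4)])
# cycles through all three children, then wraps to the first again

custom = make_switch(lyrics, index_sequence=[0, 0, 1])
print([custom() for _ in range(4)])
# first child twice, then the second, then the sequence starts over
```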

Expressions

Expressions are used to represent dynamically computable parameters in context tree models. These are especially useful for algorithmic composition, in which certain musical parameters are computed based on mathematical rules. MPS uses a custom expression language supporting logical and arithmetic expressions with variables and function calls.

Literals

Literals are the basic units of information in the expression language. The following table summarizes the available literal types.

Type      Internal Type     Description
boolean   boolean           Boolean value. Permitted literals are true and false.
integer   int               Integer number with an optional negative sign, such as 42, -23 or 0.
float     double            Floating point number with an optional negative sign, such as 3.1415 or -2.1.
fraction  Fraction          Fraction represented by an integer numerator and an integer denominator, for instance 1/4. Arithmetic divisions automatically result in a fraction if both operands are integer numbers.
string    java.lang.String  A sequence of zero or more characters encoded in UTF-16.

Operators

The system supports boolean, comparison and arithmetic operators. They are listed in the following table, ordered from highest to lowest precedence.

Operator  Description
!         Unary boolean negation. For example, !true evaluates to false.
-         Unary arithmetic negation. For example, -(2+1) evaluates to -3.
*         Arithmetic multiplication.
/         Arithmetic division. Results in a fraction if both operands are integer numbers.
%         Modulo operator.
+         Arithmetic addition. May also be used to concatenate strings.
-         Arithmetic subtraction.
==        Evaluates to true if the left operand is equal to the right operand.
!=        Evaluates to true if the left operand is not equal to the right operand.
<         Evaluates to true if the left operand is less than the right operand.
>         Evaluates to true if the left operand is greater than the right operand.
<=        Evaluates to true if the left operand is less than or equal to the right operand.
>=        Evaluates to true if the left operand is greater than or equal to the right operand.
and       Boolean conjunction. The result is true if and only if both operands evaluate to true.
or        Boolean disjunction. The result is true if at least one of the operands evaluates to true.

Parentheses may be used for custom operator prioritization, for example:

(2 + 3) * 4

In the previous expression, the term 2+3 is evaluated first and the result is multiplied by 4. Without parentheses, 3*4 would be evaluated first due to the higher precedence of the multiplication operator.

Type Conversions

Operands are dynamically cast where required. For example, several dynamic type casts are applied in order to evaluate the following expression.

1 + 0.7 > 3/4 and !(n % 2)

To compute the sum 1 + 0.7, the integer 1 is implicitly converted to a floating point number. To evaluate the comparison 1.7 > 3/4, the float 1.7 is automatically converted to the fraction 17/10. The left-hand comparison 17/10 > 3/4 yields true. The modulo operation on the right-hand side yields the remainder of n divided by 2. Since this remainder is wrapped in a boolean negation, it must implicitly be cast to a boolean value, which evaluates to false if the remainder is equal to zero and to true otherwise. The result of this implicit cast is negated and then used as the right operand of the and conjunction. The right-hand side of the and operator can thus be read as "n is divisible by 2". Refer to the following table for an overview of the implicit type conversion rules.

Type 1    Type 2    Resulting Type
boolean   integer   integer
boolean   float     float
boolean   fraction  fraction
boolean   string    string
integer   float     float
integer   fraction  fraction
integer   string    string
float     fraction  fraction
float     string    string
fraction  string    string
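The chain of casts described above can be verified with Python's fractions module. This is an illustrative sketch; the MPS-internal Fraction type is assumed to behave analogously, and int_to_bool is a hypothetical helper modeling the implicit integer-to-boolean cast.

```python
from fractions import Fraction

# 1.7 is cast to the fraction 17/10 and compared with 3/4.
lhs = Fraction(17, 10) > Fraction(3, 4)
print(lhs)  # True

def int_to_bool(value):
    """Implicit integer -> boolean cast: false iff the value is zero."""
    return value != 0

n = 4
rhs = not int_to_bool(n % 2)  # !(n % 2) is true iff n is divisible by 2
print(lhs and rhs)            # True for even n
```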

The applied transformations are specified below, with the source type on the left of each rule and the target type on the right.

boolean → integer: false ⇒ 0, true ⇒ 1
boolean → float: false ⇒ 0.0, true ⇒ 1.0
boolean → fraction: false ⇒ 0/1, true ⇒ 1/1
boolean → string: false ⇒ "false", true ⇒ "true"
integer → boolean: false if the value is equal to 0, true otherwise
integer → float: as specified by doubleValue
integer → fraction: n/1
integer → string: as specified by valueOf
float → boolean: false if the value is equal to 0.0, true otherwise
float → integer: the nearest integer below the value of the floating point number
float → fraction: the nearest computable fraction as specified by Fraction
float → string: as specified by valueOf
fraction → boolean: false if the fraction is equal to 0/1, true otherwise
fraction → integer: the whole number part of the fraction as specified by intValue
fraction → float: as specified by doubleValue
fraction → string: as specified by toString
string → boolean: false if the string is empty, true otherwise
string → integer: as specified by parseInt
string → float: as specified by parseDouble
string → fraction: supported if the string contains two integer numbers separated by a slash (/) or a single integer number
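A few of the conversion rules above can be sketched in Python. These helper functions are hypothetical models of the described semantics, not part of MPS.

```python
from fractions import Fraction
import math

def float_to_integer(x):
    # "nearest integer below the value", i.e. the floor
    return math.floor(x)

def string_to_fraction(s):
    # "two integer numbers separated by a slash, or a single integer"
    if "/" in s:
        num, den = s.split("/")
        return Fraction(int(num), int(den))
    return Fraction(int(s), 1)

def to_boolean(value):
    # zero-like and empty values convert to false, everything else to true
    return bool(value)

print(float_to_integer(-2.1))      # -3: the floor, not truncation
print(string_to_fraction("3/4"))   # 3/4
print(to_boolean(""))              # False
```

Note that float → integer takes the floor (so -2.1 becomes -3), whereas fraction → integer keeps the whole number part as specified by intValue.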

Function Calls

Functions are used to dynamically retrieve musical context information. They are evaluated during the compilation process (see section Rendering Context Layer Model Representations). The returned values depend on the given parameters, the stream context and the temporal context in which they are invoked. Refer to the following table for an overview of the available functions.

Signature                    Return Type    Description
chordsAvailable()            boolean        Returns true if context harmonies are available in the current context, false otherwise.
getRootNote()                NoteReference  Returns the root note of the current context harmony.
getBassNote()                NoteReference  Returns the bass note of the current context harmony, which can in some cases differ from the root note.
getRandomBoolean()           boolean        Returns a random boolean value, i.e. true or false.
getRandomInteger(min, max)   integer        Returns a random integer value between min (inclusive) and max (exclusive).
getRandomDouble(min, max)    double         Returns a random double value between min (inclusive) and max (exclusive).
getTime()                    fraction       Returns the current time in the current stream in terms of note duration (e.g. after a quarter note, the elapsed time is 1/4). Refer to section Time Model for more details.
getTimeSignature()           TimeSignature  Returns the current time signature.
isInFragmentContext(string)  boolean        Returns true if the current context stack contains the fragment with the given name, false otherwise.
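The fraction-based time model underlying getTime() can be sketched with Python's Fraction type: the elapsed time is assumed to be the exact sum of the note durations played so far, which avoids floating point rounding.

```python
from fractions import Fraction

# Sketch of the assumed time model behind getTime(): elapsed time is
# the exact sum of all note durations played so far in the stream.
durations = [Fraction(1, 4), Fraction(1, 8), Fraction(1, 8)]
elapsed = sum(durations, Fraction(0))
print(elapsed)  # 1/2: a quarter note plus two eighth notes
```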