

3.5 The Music V sound synthesis language

Through its data representation system, Music V (Mathews 1969) enables the user to construct synthetic instruments and supply them with a score that describes a detailed musical performance. The representation of Music V is a specialized programming language which provides instructions for synthesizing sound.

Music V instruments are constructed by combining unit generators, which can be regarded as software equivalents of analog synthesizer modules.

Each unit generator performs a primitive signal generation or modification task.

Among the available unit generators are an oscillator, a set of filters, a random noise generator, and an envelope generator. For example, oscillators may be combined to modulate each other’s frequency or output amplitude. Moreover, the outputs of several oscillators may be mixed to create complex sounds. In the Music V representation, each instrument is given a unique identification number so that it can be referred to during a performance. In principle, Music V imposes no limits on either the number of unit generators per instrument or the number of instruments in an orchestra.
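The unit-generator principle can be sketched in a few lines of Python. The sketch below illustrates only the idea described above, not the actual Music V implementation; the function names, the fixed sampling rate, and the specific frequency and amplitude values are assumptions made for the example. One oscillator modulates the frequency of another, and a third is mixed into the result.

import math

SAMPLE_RATE = 44100  # assumed sampling rate in Hz

def oscillator(freq_hz, amp, n_samples, freq_mod=None):
    # A sine oscillator; freq_mod is an optional per-sample frequency
    # offset produced by another oscillator (frequency modulation).
    phase = 0.0
    out = []
    for i in range(n_samples):
        f = freq_hz + (freq_mod[i] if freq_mod is not None else 0.0)
        out.append(amp * math.sin(phase))
        phase += 2.0 * math.pi * f / SAMPLE_RATE
    return out

def mix(*signals):
    # Sum several signals sample by sample, as when the outputs of
    # several oscillators are mixed to create a complex sound.
    return [sum(samples) for samples in zip(*signals)]

n = SAMPLE_RATE                                          # one second of audio
vibrato = oscillator(6.0, 5.0, n)                        # slow, shallow modulating oscillator
voice = oscillator(261.63, 0.5, n, freq_mod=vibrato)     # carrier with modulated frequency
drone = oscillator(130.81, 0.3, n)                       # a second tone to be mixed in
signal = mix(voice, drone)                               # combined output of the "instrument"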

The musical performance is controlled by a set of timed events called notes.

Each note consists of timing information (such as the onset time and duration of the note) and parameter data specific to the individual instruments. How nearly all of a note’s parameters are used must be explicitly defined by the instrument.

Music V stores its data in a set of records. Each record is written on a single line of text and is divided into a set of fields, which are ordered from left to right.

In each record, the first field represents the record type (also called an “operation code”).

Individual types are reserved for each unit generator and for note events. The interpretation of other fields depends on the record type and/or instrument.

In note events, the record type is “NOT”. The second field contains the start time, in beats, of the note, and the third field contains the number of the instrument that plays the note. The fourth field contains the duration of the note. The rest of the fields may contain values for various synthesis parameters, depending on the instrument’s construction. An arbitrary number of notes of any duration can be played simultaneously.

Typically, a note event contains parameters that specify pitch and amplitude. Additional parameters might be added for controlling tone, vibrato rate and depth, and so forth.

Below is a sample program written for Music V; such programs are called scores in Music V terminology (Mathews 1969: 44). A Music V score consists of two parts: instrument definitions that comprise an orchestra, and a set of notes to be played by the instruments.

INS 0 1 ;
OSC P5 P6 B2 F1 P30 ;
OUT B2 B1 ;
END ;
GEN 0.00 1 1 1 ;
NOT 0.00 1 1.00 1000 3.03 ;
NOT 1.00 1 1.00 1000 3.82 ;
NOT 2.00 1 1.00 1000 5.54 ;
NOT 3.00 1 3.00 1000 3.03 ;
TER 6.00 ;

In Music V scores, each record contains a data statement. A record is terminated by a semicolon. Records consist of fields that are separated by spaces. Fields are ordered from left to right and can be referred to by a numeric identifier. For example, the leftmost field’s identifier is P1, the next field’s identifier is P2, and so on.

Field P1 contains an operation code and other fields contain parameters for the data statement.
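As an illustration of these conventions (the parser below is not part of Music V; its name and behaviour are assumptions made for this example), a record can be split at its terminating semicolon and its space-separated fields exposed under the P1, P2, ... identifiers:

def parse_record(line):
    # Drop the terminating semicolon, then split the record into its
    # space-separated fields; P1 is the operation code.
    body = line.split(";")[0]
    fields = body.split()
    return {f"P{i + 1}": value for i, value in enumerate(fields)}

record = parse_record("NOT 0.00 1 1.00 1000 3.03 ;")
print(record["P1"])   # 'NOT'  -> operation code
print(record["P2"])   # '0.00' -> start time in beats
print(record["P3"])   # '1'    -> instrument number
print(record["P4"])   # '1.00' -> duration in beats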

The first four records form a definition for instrument number 1. The INS operation code begins an instrument definition. P2 of the INS statement specifies the time at which the statement is executed. P3 specifies a numeric identifier for the instrument. In the example, the instrument definition starts at time 0 (i.e., immediately at the start of the performance). The instrument is given 1 as its ID number. The OSC record defines an oscillator that will be used as a sound generator. The parameters of the OSC statement are amplitude, pitch, output, waveform function, and sum, respectively. In the example, amplitude and pitch values are controlled by, respectively, the P5 and P6 fields of note records. In other words, each note specifies its own amplitude and pitch.

The oscillator output is copied to a memory block named B2. The oscillator waveform function is F1, which is defined in the GEN record on the fifth line.

P30 is used as a temporary storage space for the oscillator. The OUT record on line 3 is used for connecting the oscillator output signal, stored in B2, to a main signal output memory block B1. The END record terminates the instrument definition. The GEN record defines a waveform function as one period of a sine wave.
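The behaviour of this instrument can be approximated with a short Python sketch. It is a conceptual reconstruction rather than Mathews’ code: the oscillator is modelled as a table-lookup loop whose running phase (the “sum”, kept in temporary storage as P30 is above) advances by the pitch coefficient on every sample, OSC writes its result into a block standing in for B2, and OUT adds that block into a main output block standing in for B1. The variable names and block size are assumptions made for the example.

import math

N = 512                                                  # assumed wavetable block size
F1 = [math.sin(2 * math.pi * i / N) for i in range(N)]   # GEN: one period of a sine wave

def osc(amplitude, pitch, output_block, table, sum_):
    # Table-lookup oscillator: scale the table value at the current phase
    # and advance the phase by the pitch coefficient, wrapping around the table.
    for i in range(len(output_block)):
        output_block[i] = amplitude * table[int(sum_) % N]
        sum_ = (sum_ + pitch) % N
    return sum_                                          # running phase, kept for the next block

def out(input_block, main_block):
    # OUT: add the instrument's output block into the main output block.
    for i in range(len(main_block)):
        main_block[i] += input_block[i]

b1 = [0.0] * N                                           # main output block (B1)
b2 = [0.0] * N                                           # instrument output block (B2)
phase = osc(amplitude=1000, pitch=3.03, output_block=b2, table=F1, sum_=0.0)
out(b2, b1)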

In Music V oscillators, pitch is specified by a coefficient that determines a fundamental frequency. The frequency can be calculated as f0 = (fs * p) / N(F), where f0 is the fundamental frequency in Hertz, fs is the sampling rate of the resulting audio signal, p is the pitch coefficient, and N(F) is a memory “block size”, i.e., the number of samples reserved for storing wavetable functions (Mathews 1969: 127). In the example, a sampling rate (fs) of 44100 and a block size of 512 are assumed. Setting p to 3.03 would yield a frequency of 260.98 Hz, which is close (with a small round-off error) to middle C in an equally tempered scale with a tuning reference of A = 440 Hz. Adding more decimals to p would yield a more precise definition of pitch.
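The arithmetic can be checked directly; the snippet below simply evaluates the formula with the values assumed above and, by inverting it, shows roughly what coefficient an exact middle C would require.

fs = 44100                           # assumed sampling rate in Hz
N = 512                              # assumed block size N(F)
p = 3.03                             # pitch coefficient from the example NOT records

f0 = fs * p / N
print(round(f0, 2))                  # 260.98 Hz, close to middle C (261.63 Hz)

# Inverting the formula gives the coefficient for an exact middle C:
middle_c = 440.0 * 2 ** (-9 / 12)    # equal temperament, A = 440 Hz
p_exact = middle_c * N / fs
print(round(p_exact, 4))             # about 3.0375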

The NOT records in the score will play four consecutive tones on instrument 1. The corresponding score in music notation is shown in figure 3-1. The TER record terminates the score at beat 6.00.

Semantically, a note in Music V is equivalent to a sound event. Each note has an explicitly defined onset time and duration. The only external variables that affect a note are the digital audio signal’s sampling rate, the memory block size, and the performance tempo. Whether and how the sound event is audible is determined by the instrument definition. All parameters in instruments are explicitly defined and deterministic, except when a random function (Mathews 1969: 128-129) is used to produce sound or to control a parameter of an instrument. Music V does not include a signifier for a “rest”. A rest is produced implicitly by not defining a note for the desired length of time.
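The implicit nature of rests can be illustrated with a small Python sketch; the onset and duration values below are hypothetical (chosen so that a gap actually occurs) and the helper function is not part of Music V.

# Hypothetical (onset, duration) pairs in beats; the gap between the
# third and fourth note is an implicit rest.
notes = [(0.00, 1.00), (1.00, 1.00), (2.00, 1.00), (3.75, 2.25)]

def implicit_rests(note_list):
    # Return the gaps (rests) between consecutive notes, if any.
    rests = []
    for (onset_a, dur_a), (onset_b, _) in zip(note_list, note_list[1:]):
        gap = onset_b - (onset_a + dur_a)
        if gap > 0:
            rests.append((onset_a + dur_a, gap))     # (rest start, rest length)
    return rests

print(implicit_rests(notes))                         # [(3.0, 0.75)] -> a 0.75-beat rest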

Figure 3-1: An example score

The Music V score is a symbolic representation, although it provides a fairly explicit definition of sound events through notes and instrument definitions. For example, Music V scores do not contain detailed descriptions of the sound synthesis algorithms needed when the score is translated into a sound file. Therefore, it is possible that different implementations of Music V-compatible synthesis programs might produce slightly different-sounding translations of the same score.

The influence of Music V and its predecessors is demonstrated by the large number of other unit-generator-based synthesis languages. Roads, for example, lists 20 synthesis languages, including Mathews’ Music III, IV and V (Roads 1996: 789-790). A modern and widely used member of this family is Csound by Barry Vercoe (Boulanger 2000).