This informal little essay grew out of some observations I have made in my own computer music work.
Melody is an often neglected, perfunctory or unsatisfactorily realized component in many electroacoustic compositions. Why? I do not believe that this is simply because melody is such a mainstay of most acoustic music and that composers prefer to "do something different" and to "explore other possibilities" in their computer-generated works. More fundamental, I think, is the simple fact that supple, expressive melodic phrasing is difficult to achieve with the hardware and software tools most readily available in electroacoustic music production.
Using either MIDI hardware or, with Music N synthesis languages, a note list preprocessor, it is easy enough to render a sequence of pitches in rhythm. But to phrase this melody with expressive, purposeful note-by-note inflections and nuances in intonation, articulation, loudness, color, brightness and other parameters, as a fine singer, violinist or saxophonist would do, fluidly, intuitively and "effortlessly" in "breathing life into a melody," is a much more complex task. The construction of adequate synthesis algorithms and the creation of sufficiently detailed performance data can become dauntingly complicated and tedious.
Currently available high-end sample libraries with multiple velocity layers, alternative articulations and other enhancements, and sequencing programs that enable one to sequentially edit and "draw" data for multiple MIDI controllers, attempt to address some of these problems. For my own purposes, however, I have found MIDI realization inadequate to this task. Often, continuous MIDI controllers such as modulation wheels and aftertouch are simply too coarse, reducing a vast range of subtle inflection possibilities to 128 discrete, stepped values.
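The coarseness is easy to quantify: a standard MIDI continuous controller is a 7-bit value, so however finely a gesture is shaped, it collapses into at most 128 steps. A minimal Python sketch (purely illustrative, not drawn from any particular sequencer) of what that quantization does to a smooth swell:

```python
# Illustrative sketch: quantizing a smooth control gesture to 7-bit MIDI.
# A continuous crescendo from 0.0 to 1.0 becomes a staircase of at most
# 128 discrete levels, no matter how finely it was originally drawn.

def to_midi_cc(value):
    """Map a normalized control value (0.0-1.0) to a 7-bit MIDI value (0-127)."""
    return max(0, min(127, int(round(value * 127))))

# Sample a smooth crescendo at 1000 points...
smooth = [i / 999 for i in range(1000)]
stepped = [to_midi_cc(v) for v in smooth]

# ...and count how many distinct levels survive quantization.
print(len(set(stepped)))  # at most 128, regardless of the sampling density
```

A thousand distinct input values survive as only 128 output levels; every nuance finer than roughly 0.8% of the controller's range is simply discarded.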
Additionally, one would need a great many such one-dimensional mechanical controllers -- more than could be manageably coordinated with two hands, two feet and a mouth -- to achieve convincingly supple performances of many types of melodic phrases. And editing continuous controller data one controller at a time in successive passes within a sequencing program such as Cubase or Digital Performer is not only laborious, but, more importantly, makes it difficult to coordinate the composite effect and interaction of discrete parameters to produce desired nuances. Imagine making four or more editing passes on a note list to simulate the results of changes in pressure, velocity and placement on the string of a cello bow during the performance of a melodic phrase, or even of a single note. This is not a task I would undertake with exhilaration. Give me a cello bow, please.
There is nothing revelatory about the "problems with melody" noted above, of course. They constitute a key reason for the continued vitality of "music minus one" computer-generated compositions, in which one or more vocalists or acoustic instrumentalists perform all of the more interesting melodic passages around which the computer part weaves a textural web.
Research into the development of more sophisticated types of computer-based musical instruments, such as the MIT Media Lab's hyperinstruments, and of alternative performing interfaces (e.g. data gloves and other types of wearable controllers) offers the hope of more promising solutions in the future. However, most composers have access neither to such specialized hardware nor to performers skilled in using these prototypical resources.
The film/musical composition Second Sight is a recent computer-generated work in which I tried to address some of these issues in a manageable fashion with existing tools. I had only twelve days in which to complete this five-and-a-half-minute work. (To composers accustomed to Hollywood-type deadlines this might seem like an ample cushion. But I typically suffer angst at the inception of a new work, customarily getting off to a slow start, and thus had to really hoof it on this piece.)
As luck would have it, many of the initial ideas that came to mind for this piece were short melodic segments with very particular types of inflections and phrasings. Within the work these phrases are "played" primarily by bagpipe-like, duduk-like and other aerophone-like synthesis algorithms. The process was accomplished entirely in software, principally with Csound and the Csound note list preprocessor Score11. The Csound instruments (synthesis algorithms) and companion score files used to create these melodic segments generally contain between fifty and ninety parameter fields (variables that can change in numerical value from one note to the next, or that can trigger time-varying changes within a single note). These multitudinous parameters -- which might be likened to having dozens of modulation wheels and dozens of fingers to manipulate them simultaneously -- were mapped to various aspects of pitch and intonation, accentuation, articulation, timbre, brightness, loudness or amplitude, ambience and spatial localization.
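To give a concrete (and greatly simplified) sense of what a note with dozens of parameter fields looks like as data, the Python sketch below assembles a single Csound-style "i" score statement. The field names, their ordering and their values are invented for illustration; they are not drawn from the actual Second Sight instruments, which use fifty to ninety such fields per note.

```python
# Hypothetical sketch: building one Csound-style "i" score statement from
# named parameter fields. All names and values here are invented for
# illustration; real score lines are just rows of numbers in p-field order.

# Ordered p-fields: p1 = instrument number, p2 = start time, p3 = duration,
# followed by per-note expressive parameters.
fields = [
    ("instrument", 1),
    ("start", 0.0),
    ("duration", 2.5),
    ("pitch_hz", 440.0),
    ("detune_cents", -7),      # slight "out of tune" intonational deviation
    ("attack_time", 0.12),     # articulation
    ("amplitude_db", -12.0),   # loudness
    ("brightness", 0.65),      # e.g. a filter cutoff or spectral tilt
    ("vibrato_depth", 0.4),    # pitch inflection within the note
    ("pan", 0.3),              # spatial localization
]

# The score line itself is simply the numeric values, in p-field order.
score_line = "i " + " ".join(str(value) for _, value in fields)
print(score_line)
```

With ten fields the statement is already unwieldy to type and proofread by hand; multiplying that to fifty or ninety fields per note is what makes preprocessors like Score11, and the global and probabilistic operations described below, practical necessities rather than conveniences.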
Some of these parameters were controlled deterministically, by typing in a sequence of values. Other parameters employed "global" operations, changing in value over a specified duration (which might include several notes) according to a curve or trajectory whose exact shape is user-definable. Still other nuances were achieved by means of probability or pseudo-random procedures, to introduce (or to automate the inclusion of) deviations or "surprises," or to create variations from one performance to the next (allowing me to select my "favorite" performances of a phrase).
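These three modes of control can be sketched in a few lines of Python. Here a single "brightness" parameter is shaped across a five-note phrase; the curve shape, ranges and seed are arbitrary illustrations and have nothing to do with Score11's actual syntax:

```python
import random

# Illustrative sketch of the three control strategies described above,
# applied to one parameter across a five-note phrase.
# (Shapes and ranges are arbitrary; this is not Score11 syntax.)

# 1. Deterministic control: type in one value per note.
deterministic = [0.2, 0.5, 0.8, 0.6, 0.3]

def curve(t, breakpoints):
    """2. "Global" control: linear interpolation along a user-defined
    breakpoint curve, where t runs 0.0-1.0 over the whole phrase."""
    for (t0, v0), (t1, v1) in zip(breakpoints, breakpoints[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return breakpoints[-1][1]

# A hairpin swell: rise to a peak at 60% of the phrase, then fall away.
hairpin = [(0.0, 0.1), (0.6, 0.9), (1.0, 0.2)]
global_values = [curve(i / 4, hairpin) for i in range(5)]

# 3. Pseudo-random control: perturb each value slightly, so successive
# "performances" of the same phrase differ.
rng = random.Random(2005)  # a fixed seed yields one repeatable "performance"
performance = [v + rng.uniform(-0.05, 0.05) for v in global_values]

print([round(v, 3) for v in performance])
```

Re-seeding the generator produces a different but equally plausible rendition each time, which is what makes it possible to render several takes and keep the most convincing one.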
The process of deconstructing a mentally imagined melodic phrase into interdependent sequences of numbers may seem far removed from picking up an instrument and playing some riffs. And indeed it can be an intellectually fatiguing, puzzle-solving procedure that at times seems more akin to transcribing music than to playing it. When I attempt to introduce melodic inflections the initial results often sound clumsy, mechanical, exaggerated or simply "unmusical." Correcting a single flaw, such as a maladroit-sounding portamento between two notes, may require modifying several parameters for both of the notes. Typically, while refining the performance of a melodic line I will work simultaneously on three or four such flaws, trying at all times to keep the problems manageable. On a good day some problems in melodic phrasing are solved fairly quickly. Others require many iterations of editing, rendering, listening, disappointment and evaluation to achieve (or at least to approach) a desired result. While it sounds banal, this process certainly does increase one's appreciation of the intricacy of melodic performance, and one's admiration for the melodic skills of vocal and instrumental artists.
On the whole I am happy with the melodic phrasing within many passages of Second Sight. The audio excerpts below, drawn from this work, illustrate some of the issues discussed above.
Audio examples from Second Sight
(1) The first melodic phrase in the work coincides with the visual title sequence that appears 12 seconds into the work. It consists of a melodic arc, played by a bagpipe-like synthesis algorithm, that includes just three sustained tones, albeit three notes that portend the melodic style of several ensuing passages and introduce a defining sound of the composition. Example 1 is my first, skeletal attempt at realizing this melodic phrase. Example 2 presents the completed final version.
(2) Throughout the piece the bagpipe timbre becomes associated with a series of "companion" timbres that "answer," extend and develop related motivic ideas and elements of phrasing. In example 3 a hurdy-gurdy mirrors the hairpin amplitude and brightness swells, the microtonal inflections and the "out of tune" intonational deviations of the bagpipes.
(3) Example 4 presents a longer phrase (or a series of interlocking "phraselets"), played by a timbre midway between that of a duduk and a saxophone. This example reflects the influence of, and my admiration for, historical jazz saxophonists with a particular gift for lyrical phrasing, such as Lester Young, Sonny Stitt, Johnny Hodges and Paul Desmond.
(4) Example 5, presented here in condensed form, develops, varies and extends some of the motivic material introduced in example 3, and includes overlapping phrases that might suggest duets, ensembles or a line that frequently splits into two or more parts.
Copyright © 2005 Allan Schindler