# 18 Randomness and Chance Composition

There is divinity in odd numbers, either in nativity, chance, or death.

—William Shakespeare, *The Merry Wives of Windsor*

Music composition, and artistic expression in general, often reflects
a tension between the rational and irrational processes that underlie
and motivate creative work. Perhaps more than at any other time in
history, the 20th century witnessed the rational and irrational taking
explicit shape in the techniques that composers used to create
music. It should not be surprising that the same century that gave
birth to quantum mechanics and Heisenberg's uncertainty principle also
found artists consciously incorporating *non-deterministic*, or
random, processes into their artistic palettes. One of the most
important expressions of this influence was in the *chance
procedures* adopted by some artists. Whether it is motion added to
a dance by Merce Cunningham, an abstract painting by Jackson Pollock
or the I Ching music of John Cage, randomness and chance are
consciously embraced as generative techniques for making art. An
interest in chance procedures also served as an important motivation
in the development of the experimental music tradition in the 1950s,
from which computer composition stems. The earliest examples of
computer composition by Lejaren Hiller are largely based on the
chance procedures that will be discussed in the next two chapters.
These techniques can have a profound effect on the compositional
process yet are very simple to perform with a computer. The computer
also enables composers to experiment with many different types of
randomness that are more interesting than the simple “coin tossing”
model that every composer is familiar with. It is perhaps somewhat
ironic, then, that the computer programs composers typically use to
create chance compositions are not really generating randomness at all!

## Random Number Generators

Computers are designed to function in a totally consistent and
reproducible manner. This is a wonderful feature when your plane is
landing or your tax returns are prepared but it means that randomness
is inherently difficult to achieve with such a deterministic
machine. Although some applications use specialized hardware to
generate truly random signals, typical computer programs only
simulate randomness with a special type of algorithm called a
*pseudo-random number generator*. Pseudo-random number
generators are software programs that calculate a series of numbers
with statistically random properties. But, as the name suggests, these
numbers are not really random at all. A pseudo-random generator is a
completely deterministic algorithm that generates a single series of
numbers. A common design uses the past output value as the
*seed* for calculating the next number in the series. This means
that each time the generator starts from a particular seed, the exact
same sequence of numbers will ensue. Many random number generators,
including Lisp's `random` function, allow the user to specify an
initial random state seed that positions the algorithm at a certain
location in its sequence of numbers.
Composers who want to express randomness “philosophically” in a
composition will therefore want to use different initial seeds each
time a chance program runs to ensure that the program generates a
different outcome each time. On the other hand, composers often want
just the opposite: to save seeds in order to “back up” in the event
that a random process has produced particularly satisfying results!
This latter method is one way to implement the common *generate and
test* compositional style in computer composition. By building
random choice into the algorithms that generate music the composer can
generate many possible "drafts" of a piece, each with its own random
variations. These versions can then be compared to determine which
one(s) best satisfy the composer's goals. Sometimes a whole piece is
generated in this manner; other times the composer collects the best
features from different drafts and combines them into a final
version. Since the initial seed for each draft was saved, the composer
can regenerate the best versions at any time, as long as the program
code remains exactly the same.
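The generate and test cycle is easy to sketch in code. Although this book's examples are in Lisp, the following Python illustration (Python's `random.seed` standing in for Lisp's random state seeds; the function names are our own) shows how a saved seed regenerates a draft exactly:

```python
import random

def random_draft(seed, length=8):
    """Generate one 'draft': a list of random MIDI key numbers.

    Seeding the generator first means the draft can be regenerated
    exactly, as long as the program code stays the same."""
    random.seed(seed)
    return [random.randrange(21, 109) for _ in range(length)]

# Generate several drafts, remembering the seed used for each.
drafts = {seed: random_draft(seed) for seed in (1, 2, 3)}

# A saved seed lets us "back up" to a satisfying draft at any time.
best_seed = 2
assert random_draft(best_seed) == drafts[best_seed]
```

As long as `random_draft` itself is unchanged, the saved seed is a complete record of the draft.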

Since the material in this book assumes that the reader is
interactively experimenting with examples, we are virtually assured
that the book's path through Lisp's random number series will not be
repeated by the reader. For this reason none of the examples in this
book make use of random number seeds. But by establishing different
seeds each time you begin to work, or by saving the current seed
before you run a chance program, you can minimize (or maximize) the
likelihood that your program will generate the same results. To learn
more about the process of seeding in Lisp consult the documentation on
the `random` function in the language specification.

## Probability

A random process is a process in which outcomes have only a certain
likelihood of happening. *Probability* is a measurement of the
likelihood of these events actually occurring. Probability in our
daily lives is often described in terms of a “percentage
chance”. For example, the weatherman might say: “There is a 25%
chance that it will rain today”. The percentage indicates that
probability is *distributed* over all the events in a random
process, where each event, or range of events, has a certain
proportion of the total probability. So, although we do not know for
certain if it will rain or not, probability allows us to infer several
facts from the forecaster's statement:

- there is a 25% chance that it will rain today
- there is a 75% chance that it will not rain today
- there is a 100% chance that it will either rain or not rain today
- there is a 0% chance that neither will happen

Weather forecasts notwithstanding, probability factors are typically expressed as values between zero and one rather than as percentages between 0 and 100. The probabilities of all the events in a random process add up to 1.0 (100%) because there must be absolute certainty (1.0) that at least one of the outcomes will happen.

Some random processes are characterized by an equal distribution of
probability over all of the events in the process. This is called a
*uniform probability distribution*. For example, when a fair
die is rolled there is an equal likelihood that any one of the six
sides will land face up. Since the total probability must add up to 1
and there is an equal chance for any side to land on top, each of the
six sides has a probability factor of 1/6. For a given event `x`,
its probability, P(`x`), is the ratio of the number of ways the
outcome can happen to the total number of possible outcomes. For
example, if `x` is a randomly chosen key number on a piano then
P(`x`)= 1/88. If `x` is a randomly chosen black key on the piano,
then there are 36 possible ways to do this and so
P(`x`)=36/88=9/22. Similarly, the complementary probability of an
event (all the events that are not `x`) is 1-P(x). So the
probability of selecting a white key `x` on the piano can be
calculated by complementing the probability of black keys, or P(x)=
1-9/22=13/22.

In a continuous distribution, probability is defined over an area
under a distribution curve. This means that the probability of any
single event, or “point” is zero since it has no area. This sounds
odd but since probability is the ratio of the number of ways an event
can happen to the total number of events then if there are an infinite
number of events the probability of any single event is zero.
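The piano arithmetic above can be checked directly. A small Python sketch using exact fractions (the key counts are those given in the text):

```python
from fractions import Fraction

total_keys = 88   # keys on a piano
black_keys = 36   # black keys on a piano

# P(x) is the ratio of favorable outcomes to total outcomes.
p_any_key = Fraction(1, total_keys)           # one specific key
p_black   = Fraction(black_keys, total_keys)  # any black key
p_white   = 1 - p_black                       # complementary probability

assert p_black == Fraction(9, 22)
assert p_white == Fraction(13, 22)
# The probabilities of all outcomes sum to absolute certainty.
assert p_black + p_white == 1
```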

Lisp's `random` function generates numbers in both discrete and
continuous distributions. If an integer range is specified to
`random` only integers will be returned. If the range is expressed
as a real, or floating point, value then continuous values within the
limits of floating point precision are returned. The `random`
function always returns a value greater than or equal to zero and
less than the bound specified to the function. To generate any other
range of values, or to randomly generate non-numeric values, the
output from `random` must be scaled and offset, or used as indices
into lists of non-numeric data. These kinds of manipulations are such
a common occurrence in composition that Common Music provides a number
of different functions to facilitate chance operations in musical
contexts. These functions are listed together here and then discussed
at appropriate places later in this chapter.

- `(between low high [skip])` [Function]
- `(odds prob [then] [else])` [Function]
- `(pick ...)` [Function]
- `(pickl list [:start i] [:end i] [:skip x])` [Function]
- `(ran [:type k] [:a n] [:b n])` [Function]
- `(shuffle list [:start i] [:end i])` [Function]
- `(tendency n env1 env2 [:offset n] [:scale n])` [Function]
- `(vary n pct [where])` [Function]
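To give a flavor of these helpers before they appear in context, here is a hedged Python sketch of two of them, `odds` and `pickl`. These are paraphrases of the behavior described later in this chapter, not Common Music's actual implementation:

```python
import random

def odds(prob, then=True, otherwise=False):
    """Return then with probability prob, else otherwise."""
    return then if random.random() < prob else otherwise

def pickl(lst, start=0, end=None):
    """Randomly pick one element of lst between start and end."""
    end = len(lst) if end is None else end
    return lst[random.randrange(start, end)]

assert odds(1.0, 'yes', 'no') == 'yes'   # prob 1: always then
assert odds(0.0, 'yes', 'no') == 'no'    # prob 0: never then
assert pickl(['c', 'd', 'e']) in ('c', 'd', 'e')
```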

## Probability Distributions

The probability of events in a random process is determined by a
*probability distribution* — a mathematical function that
describes the likelihood of all possible outcomes. Distributions can
be characterized by a *mean*, or average, value that they
produce and by a characteristic “shape” that the probability
expresses over the events in the random process. One of the great
advantages to using a computer to work with random processes is that
it is possible for a composer to employ complex distributions without
needing to understand the underlying mathematics of their
implementation (which can be quite complex for some types of
randomness). Composers who are interested in a formal presentation of
random number generation should read Knuth's
*Seminumerical Algorithms*, volume 2 of *The Art of Computer
Programming* (Knuth), which contains the definitions of many random
number generators with different probability characteristics.
*Computer Music* (Dodge) by Charles Dodge and
Thomas Jerse also contains an excellent discussion of
probability theory and random number distributions.

In the next few sections we define the characteristics of some common
distributions. Each example includes a graphical plot of sampled
events together with a *histogram* that depicts a
characteristic density of events that each probability distribution
generates. The histograms were generated by analyzing the output from
a musical process that maps random values onto a range of integer MIDI
key numbers. The musical processes that generated the results are
shown in each example. Such a process can be listened to in
“discrete” mode — quantized to the closest MIDI key number
— or in “continuous” mode, by using the channel tuning methods
discussed in Chapter 14 to output microtonal frequency values. Example 18-1
contains the basic process definition we will use to sonify the
distributions. The sound example provided for each distribution plays
the distribution in continuous mode.

Example 18-1. Mapping distributions onto frequency space.

```lisp
(define (play-ran type len rate key1 key2)
  (process repeat len
           for r = (ran :type type)
           for k = (rescale r 0.0 1.0 key1 key2)
           output (new midi :time (now)
                       :keynum k
                       :duration (* rate 1.5)
                       :amplitude .5)
           wait rate))

;;; channel tuning on first 9 channels
(define ct9 (list true 0 8))

(define (mean-ran type n a b)
  (/ (loop repeat n
           sum (ran :type type :a a :b b))
     n))
```

The `play-ran` function in Example 18-1 maps values from Common
Music's `ran` function onto a range of key numbers specified to the
process. The `ran` function generates random numbers in a
distribution specified using its `:type` keyword argument. The
following distribution types are supported:

`:uniform :low :high :triangular :beta :exponential :cauchy :poisson :gamma :gaussian`

If the `:type` argument is not specified then the uniform
distribution is generated. See the documentation on `ran` in the
Common Music Dictionary for more information about this function.

The `mean-ran` function calculates the *mean*, or average,
value of `n` outcomes of `ran`. This gives us some measure of how
well the pseudo-random number generator approximates the theoretical
mean value of each distribution type.

### Uniform distribution

In a *uniform distribution* all events have an equal likelihood
of being selected. This means that in a discrete selection over `n`
events, each event has `1/n` probability of being selected. In a
continuous distribution the events under any two equal portions of the
total probability range will have an equal likelihood of selection,
with a mean value of .5. Lisp's `random` function returns numbers in
a uniform distribution. These values can then be filtered or processed
in different ways to produce non-uniform behavior.

Figure 18-1. Two plots depicting values (points) and histogram (bars) of the uniform distribution. The Y axis plots continuous random values normalized 0-1. The X axis plots the histogram range defined over 127 key numbers.

Interaction 18-1. Playing continuous uniform distribution.

```
cm> (events (play-ran :uniform 100 .1 20 100) "uniform.mid"
            :channel-tuning ct9)
"uniform-1.mid"
cm>
```

### Linear "Low-pass" Distribution

In a linear “low-pass” distribution the probability of larger numbers decreases linearly over the range of values returned by the process. The low-pass behavior is implemented by filtering a uniform process such that the minimum of two uniformly chosen numbers is returned:

`(min (random 1.0) (random 1.0))`

The median value of the low-pass distribution is .2929; its mean is 1/3.

Figure 18-2. Low pass values (points) and histogram (bars).

Interaction 18-2. Playing the low pass distribution.

```
cm> (events (play-ran :low 50 .1 20 100) "low.mid"
            :channel-tuning ct9)
"low-1.mid"
cm>
```

### Linear "High-pass" Distribution

In a linear “high-pass” distribution the probability of larger numbers increases linearly over the range of values. The distribution can be implemented by returning the maximum of two uniform random numbers:

`(max (random 1.0) (random 1.0))`

The median value of the high-pass distribution is .7071; its mean is 2/3.

Figure 18-3. High pass values (points) and histogram (bars).

Interaction 18-3. Playing the high pass distribution.

```
cm> (events (play-ran :high 50 .1 20 100) "high.mid"
            :channel-tuning ct9)
"high-1.mid"
cm>
```

### Mean Distribution

In a mean distribution the probability of numbers in the middle of the distribution increases linearly. A mean distribution can be implemented by averaging two uniform random numbers:

`(/ (+ (random 1.0) (random 1.0)) 2)`

The expected mean value of the distribution is .5.

Figure 18-4. Triangular distribution values (points) and histogram (red bars).

Interaction 18-4. Playing the triangular distribution.

```
cm> (events (play-ran :triangular 50 .1 20 100) "tri.mid"
            :channel-tuning ct9)
"tri-1.mid"
cm>
```
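The three linear shapes just described differ only in how two uniform numbers are combined. A Python sketch (with our own helper names) confirms how filtering shifts the sample mean:

```python
import random

random.seed(1)  # fixed seed so the check is reproducible

def sample_mean(combine, n=100_000):
    """Mean of n values, each combining two uniform numbers."""
    return sum(combine(random.random(), random.random())
               for _ in range(n)) / n

low  = sample_mean(min)                       # linear "low-pass"
high = sample_mean(max)                       # linear "high-pass"
mean = sample_mean(lambda a, b: (a + b) / 2)  # mean/triangular

# Filtering skews the mean below or above .5; averaging centers it.
assert low < mean < high
assert abs(low - 1/3)  < 0.01
assert abs(mean - 0.5) < 0.01
assert abs(high - 2/3) < 0.01
```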

### Exponential Distribution

The exponential distribution returns the scaled natural logarithm of a uniformly chosen number:

`(/ (- (log (- 1 (random 1.0)))) a)`

The `a` parameter provides a “stretching factor” that scales
values closer to or further from 0; the larger the value of `a` the
more the values are pushed toward zero. If `random` returns 0 then
log(1) = 0, the minimum value in the distribution. For `a=1`, as the
uniform value exceeds 1-1/e (0.632) the values returned in the
distribution exceed 1.0. There is no upper limit to values produced in
the exponential distribution, but for `a=1` its value is less than
-log(.001) = 6.908 about 99.9% of the time.

Figure 18-5.
Exponential distribution values (points) and histogram (red
bars) for *a=1* and *a=2*.

The median value of the exponential distribution for `a=1` is .693
(the natural logarithm of 2); its mean is 1.
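The construction above is easy to reproduce. A Python sketch (the helper name is our own) checks the 6.908 bound and the mean:

```python
import math
import random

random.seed(1)

def ran_exp(a=1.0):
    """-log(1 - u)/a for uniform u: values crowd toward zero."""
    return -math.log(1.0 - random.random()) / a

samples = [ran_exp() for _ in range(100_000)]

# Unbounded above, but below -log(.001) = 6.908 about 99.9% of the time.
below = sum(s < 6.908 for s in samples) / len(samples)
assert below > 0.99

# The mean is 1/a; a larger stretching factor pushes values toward 0.
assert abs(sum(samples) / len(samples) - 1.0) < 0.02
```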

Example 18-2. An exponential distribution.

```lisp
(define (play-exp len rate key1 key2 a)
  (process repeat len
           for r = (ran :type :exponential :a a)
           for k = (rescale r 0.0 6.908 key1 key2)
           when (< k 128)
           output (new midi :time (now)
                       :keynum k
                       :duration (* rate 1.5)
                       :amplitude .5)
           and wait rate))
```

The `play-exp` process passes the `a` stretching factor to the
`ran` function. Since the distribution is unbounded, the conditional
clause ensures that only valid MIDI key numbers will be output.

Interaction 18-5. Playing the exponential distribution for a=1.

```
cm> (events (play-exp 50 .1 20 100 1) "exp.mid"
            :channel-tuning ct9)
"exp-1.mid"
cm>
```

### Gaussian Distribution

The Gaussian, or normal, distribution returns values that fall under
the normal curve. The spread, or *standard deviation*, is 1.0
centered at 0, so 68.26% of the results are between -1 and 1, and
99.74% of the results are between -3 and 3.

Figure 18-6. Gaussian distribution values (points) and histogram (bars).

Example 18-3. A Gaussian distribution.

```lisp
(define (play-gauss len rate key1 key2)
  (process repeat len
           for r = (ran :type :gaussian)
           for k = (rescale r 0.0 3 key1 key2)
           when (<= 0 k 127)
           output (new midi :time (now)
                       :keynum k
                       :duration (* rate 1.5)
                       :amplitude .5)
           and wait rate))
```

Interaction 18-6. Playing Gaussian distribution.

```
cm> (events (play-gauss 50 .1 20 100) "gauss.mid"
            :channel-tuning ct9)
"gauss-1.mid"
cm>
```
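Python's standard library includes a normal generator, `random.gauss`, which can be used to check the spread figures quoted above:

```python
import random

random.seed(1)

samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

within_1 = sum(-1 <= s <= 1 for s in samples) / len(samples)
within_3 = sum(-3 <= s <= 3 for s in samples) / len(samples)

# About 68.26% of values fall within one standard deviation,
# and 99.74% within three.
assert abs(within_1 - 0.6826) < 0.01
assert within_3 > 0.99
```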

### Beta Distribution

The beta distribution is one of the most musically interesting
distributions to work with because it describes a whole family
of distribution shapes. The distribution is controlled by two
parameters, `a` and `b`. When `a=b=1` a uniform
distribution results. When `a=b` the distribution is symmetric
around .5. When `a<1` and `b<1` the density of large and
small numbers increases. When `a>1` and `b>1` the density is
similar to the Gaussian distribution.

Example 18-4. The beta distribution.

```lisp
(define (play-beta len rate a b key1 key2)
  (process repeat len
           for r = (ran :type :beta :a a :b b)
           for k = (rescale r 0.0 1.0 key1 key2)
           output (new midi :time (now)
                       :keynum k
                       :duration (* rate 1.5)
                       :amplitude .5)
           wait rate))
```

Figure 18-7. Beta distribution values (points) and histogram (bars).

Interaction 18-7. Playing the beta distribution for a=b=.4.

```
cm> (events (play-beta 50 .1 .4 .4 20 90) "beta.mid"
            :channel-tuning ct9)
"beta-1.mid"
cm>
```
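Python's standard library also provides this family as `random.betavariate`. The sketch below (the helper name is our own) checks that `a=b=.4` pushes values toward the extremes relative to the uniform `a=b=1` case:

```python
import random

random.seed(1)

def extremes(a, b, n=20_000):
    """Fraction of beta(a, b) samples outside (0.1, 0.9)."""
    return sum(not 0.1 < random.betavariate(a, b) < 0.9
               for _ in range(n)) / n

# a=b=1 is uniform: 20% of values lie outside (0.1, 0.9).
# a=b=.4 piles probability near 0 and 1; a=b=2 avoids the extremes.
assert abs(extremes(1, 1) - 0.2) < 0.02
assert extremes(0.4, 0.4) > extremes(1, 1) > extremes(2, 2)
```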

## Random Processes in Music Composition

Both continuous and discrete random processes are used in computer composition to control the evolution of parameterized data. But — as may be noticed in the preceding examples — simply mapping random distributions directly onto sound parameters does not really translate into interesting musical results. The lack of correlation between events in a random process causes our interest to diminish rapidly as we listen. There are really only two ways to deal with this issue. One method is to reserve the use of randomness for aspects of a composition that do not affect its structural integrity. The other method is to control randomness in some manner such that the listener either perceives an underlying structure unfolding or at least has some expectation of regularity or correlation in the pattern. The remaining sections of this chapter present different approaches to controlling random processes so that they generate more interesting musical results than the basic distributions on which they are based.

## Sampling Without Replacement

Generating discrete random values using an expression like
`(random 88)` is called *sampling*. Sampling can be
performed in two different ways. *Sampling with replacement* means
that once a value has been sampled it is still available to be sampled
again. For example, if key number 60 is generated by `(random
88)` then 60 can still be selected as a possible next
value. *Sampling without replacement* describes a process in
which generated values are not returned to the population once they
have been selected. This is similar to how cards are dealt from a
shuffled deck. By using sampling without replacement, a composer
randomly *reorders* a set of specific values whose
relationships are otherwise fixed. This reordering can be perceived as
a kind of “freshness” in the data. The function `shuffle` can be
used to implement sampling without replacement. Shuffle randomly
permutes a list of values, which can then be sampled one at a time to
produce a random ordering without repetition. Since shuffle operates
on list positions rather than values, the list can hold any type of
data.

Example 18-5. Sampling without replacement.

```lisp
(define (sampling reps rate notes)
  (let ((len (length notes)))
    (process for i below reps
             for j = (mod i len)
             when (= j 0) ; need to reshuffle?
             set notes = (shuffle notes)
             output (new midi :time (now)
                         :keynum (list-ref notes j)
                         :duration (* rate 1.5)
                         :amplitude (interp j 0 .4 (- len 1) .8))
             wait rate)))
```

Our `sampling` process also produces a crescendo that starts over
again each time the values are shuffled so that each “period” in the
population can be easily heard. Note that while shuffling avoids
direct repetition within a period, there may still be a direct
repetition between the last note of one period and the first note of
the next.

Interaction 18-8. Calling shuffle.

```
cm> (events (sampling 80 .1 '(c d ef f g a bf c5)) "shuffle.mid")
"shuffle-1.mid"
cm>
```
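The card-dealing analogy translates directly into code. Here is a Python sketch of sampling without replacement (the helper name is our own):

```python
import random

random.seed(1)

def deal(population):
    """Sample without replacement: shuffle a copy of the
    population, then hand out items until it is exhausted."""
    pool = list(population)
    random.shuffle(pool)  # permutes positions, so any data works
    while pool:
        yield pool.pop()

notes = ['c', 'd', 'ef', 'f', 'g', 'a', 'bf', 'c5']
period = list(deal(notes))

# Every note occurs exactly once per period, in random order.
assert sorted(period) == sorted(notes)
```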

Direct reselection can be an issue anytime discrete random selection
is used. When the range of random values is sufficiently large, direct
repetition may be so infrequent that it does not have a noticeable
effect on the results. But as the number of events in a discrete
process diminishes, the likelihood of direct reselection increases. At
some point this may cause an unwanted regularity in musical
results. One way to deal with this is to use the optional `omit`
argument to the `between` function to avoid direct reselection. An
`omit` value will not be returned by `between`. By specifying the
last output value as the `omit` value for the next random number,
direct reselection will not occur. The following process can be used
to experiment with the effects of allowing or inhibiting direct
reselection.

Example 18-6. Avoiding direct repetition in discrete distributions.

```lisp
(define (reps len lb ub skip?)
  (process repeat len
           for l = (if skip? k false)
           for k = (between lb ub l)
           output (new midi :time (now)
                       :keynum k
                       :duration .2)
           wait .125))
```

If `skip?` is true then the variable `l` is set to `k`, the last
value returned by `between`. In this case `l` is not false, so
when it is passed as the omit value to `between`, direct repetition
of `l` cannot occur. If `skip?` is false then `l` will be false no
matter what the value of `k` is, and direct reselection is then
possible.

Interaction 18-9. Random selection with and without direct repetition.

```
cm> (events (reps 30 60 63 false) "reps.mid")
"reps-1.mid"
cm> (events (reps 30 60 63 true) "reps.mid")
"reps-2.mid"
cm>
```

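The omit trick amounts to a rejection loop that remembers the previous value. A Python sketch of the behavior the text describes (not Common Music's actual implementation of `between`):

```python
import random

random.seed(1)

def no_repeat_stream(low, high, n):
    """n random integers in [low, high) with no two
    consecutive values equal."""
    out, last = [], None
    for _ in range(n):
        k = random.randrange(low, high)
        while k == last:  # reject direct reselection
            k = random.randrange(low, high)
        out.append(k)
        last = k
    return out

keys = no_repeat_stream(60, 63, 30)
assert all(a != b for a, b in zip(keys, keys[1:]))
```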

### Variance

Continuous selection is commonly used to provide variance in scaling and offsetting procedures. For example, by scaling a regular tempo or pulse value by a small random percentage the actual rate of events will fluctuate in time and produce a more natural effect than a fixed rate would. If the percentage variance is small enough the fluctuations will not affect the perception of an underlying structural pulse. The process definition in Example 18-7 can be used to experiment with random percentage variance applied to the pulse, amplitude and duration of events.

Example 18-7. The tapping process.

```lisp
(define (tapping reps pct rate key amp)
  (let ((half (/ reps 2)))
    (process for i below reps
             for v = (if (< i half) 0 pct)
             output (new midi :time (now)
                         :duration (vary .1 v)
                         :keynum key
                         :amplitude (vary amp v :below))
             wait (vary rate v))))
```

The `tapping` process plays a single `key` number `reps` times
at a specified `rate` and amplitude. The events in the first half of
the process are played with fixed values; in the second half the
values are allowed to vary by up to `pct` of their values.
By default the function `vary` distributes variance equally on
either side of the fixed value. To produce variance less than the
value, specify `:below` as the third argument to `vary`. Similarly,
the keyword `:above` will offset the variance above the value.
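The behavior just described for `vary` can be sketched in Python as follows. This is a paraphrase of the documented behavior, with our own keyword spellings, not Common Music's code:

```python
import random

def vary(value, pct, where='around'):
    """Return value randomized by up to pct of itself:
    centered on value ('around'), entirely below it
    ('below'), or entirely above it ('above')."""
    spread = value * pct
    if where == 'below':
        return value - random.random() * spread
    if where == 'above':
        return value + random.random() * spread
    return value + (random.random() - 0.5) * spread

beats = [vary(0.2, 0.1) for _ in range(100)]
# A 10% variance keeps each beat within 5% either side of 0.2.
assert all(0.19 <= b <= 0.21 for b in beats)
```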

Interaction 18-10. Comparing 10% variance with 50% variance.

```
cm> (events (tapping 50 .1 .2 72 .6) "tap.mid")
"tap-1.mid"
cm> (events (tapping 50 .5 .2 72 .6) "tap.mid")
"tap-2.mid"
cm>
```

Even something as simple as random variation of a beat can be made
musically interesting if it is controlled in some manner. In the next
example the amount of variation applied to the beat is controlled by
an envelope. When the envelope is at 0 then no variation occurs, when
the envelope is at 1 then full variance is expressed. By using an
envelope to control random variance the process can interpolate over
the full continuum between completely random time increments and a
strictly measured pulse. The same envelope that controls the beat
will also be used to provide a value for the `a` and `b` beta
distribution parameters that generate amplitude values for the
process. When the control envelope is at zero (no random variance)
then the beta distribution will be `a=b=1` and produce values in
the uniform distribution. When the envelope is at 1 (full random
variance) then `a=b=.4` and a large percentage of loud and soft
values are produced (Figure 18-7). The shape of the envelope defined
in Example 18-8 causes the process to begin and end with random time
values and express the strict pulse in the middle.

Example 18-8. The rain process.

```lisp
(define (rain len rate env key1 key2 amp)
  (process for i below len
           for e = (interpl (/ i len) env)
           for v = (vary rate e :above)
           ;; rescale e to control shape of beta distribution:
           ;; when e=0 z=1, and when e=1 z=.4
           for z = (rescale e 0 1 1 .4)
           for b = (ran :type :beta :a z :b z)
           output (new midi :time (+ (now) v)
                       :duration v
                       :keynum (between key1 key2)
                       :amplitude (rescale b 0 1 .1 amp))
           wait rate))

(define rain-env '(0 1 .4 0 .6 0 1 1))
```

We listen to three simultaneous versions of the `rain` process, each
with a different length beat.

Interaction 18-11. Listening to rain.

```
cm> (events (list (rain 80 .5 rain-env 70 90 .8)
                  (rain 40 1 rain-env 40 60 .8)
                  (rain 20 2 rain-env 20 40 .9))
            "rain.mid")
"rain-1.mid"
cm>
```

### Shape-based composition

By applying a random variance to envelope values we can generate random variations on the basic shape of the envelope:

Example 18-9. Adding a random variance to envelope values.

```lisp
(define (shaper len rate env key1 key2)
  (process for i below len
           for e = (interpl (/ i len) env)
           for k = (rescale e 0 1 key1 key2)
           for r = (between -5 5)
           output (new midi :time (now)
                       :keynum (+ k r)
                       :duration rate)
           wait (* rate (pick .5 .25 1))))

(define shape-env '(0 0 .2 1 .4 .25 .6 .5 .8 0 1 .5))
```

The process in Example 18-9 scales and offsets values from an envelope to generate a range of key numbers specified to the process. A random amount is then added to the key number to shift it upwards or downwards.

Interaction 18-12. Listening to two variations of the same envelope.

```
cm> (events (list (shaper 40 1 shape-env 70 90)
                  (shaper 40 1 shape-env 60 80)
                  (shaper 20 2 shape-env 60 48))
            "shape.mid")
"shape-1.mid"
cm> (events (list (shaper 40 1 shape-env 70 90)
                  (shaper 40 1 shape-env 60 80)
                  (shaper 20 2 shape-env 60 48))
            "shape.mid")
"shape-2.mid"
cm>
```

#### Tendency Masks

A *tendency mask* is a technique related to the shape-based
approach, first described by the composer Gottfried Michael Koenig. A
tendency mask is created using two envelopes that specify the lower and
upper boundaries for a random selection. By describing each boundary
as an envelope, the range of the random variable can vary as a function
of some dynamic process. Common Music's `tendency` function returns
a random value given an `x` lookup value and the lower and upper
envelopes that define the boundaries.

Example 18-10. Tendency masking.

```lisp
(define low-mask '(0 .4 .75 .4 1 0))
(define top-mask '(0 1 .25 .6 1 .6))

(define (masker len rate lower upper key1 key2)
  (let ((last (- len 1)))
    (process for i to last
             for e = (tendency (/ i last) lower upper)
             for k = (rescale e 0 1 key1 key2)
             output (new midi :time (now)
                         :duration rate
                         :keynum k)
             wait rate)))
```

In this example the two envelopes `lower` and `upper` control the
range of random selection. The variable `e` is set to a random value
between the interpolated lower and upper bounds of the two envelopes
specified to `tendency`. This random value, which lies between zero
and one, is then rescaled to lie between the key numbers `key1` and
`key2` passed as arguments to the process.

Interaction 18-13. Listening to masker.

```
cm> (events (masker 80 .1 low-mask top-mask 20 110) "masker.mid")
"masker-1.mid"
cm>
```
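A tendency mask is simple to reconstruct in Python. In the sketch below a linear breakpoint lookup (our own helper) stands in for Common Music's envelope handling:

```python
import random

def interp(x, env):
    """Linear lookup in a flat breakpoint list (x1 y1 x2 y2 ...)."""
    pts = list(zip(env[::2], env[1::2]))
    for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
        if x1 <= x <= x2:
            return y1 + (y2 - y1) * (x - x1) / (x2 - x1)
    return pts[-1][1]

def tendency(x, lower, upper):
    """Random value between the two envelopes at lookup point x."""
    lo, hi = interp(x, lower), interp(x, upper)
    return lo + random.random() * (hi - lo)

# the masks from Example 18-10
low_mask = [0, .4, .75, .4, 1, 0]
top_mask = [0, 1, .25, .6, 1, .6]

# at x=0 the mask spans .4 to 1
v = tendency(0.0, low_mask, top_mask)
assert 0.4 <= v <= 1.0
```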

## Weighted Random Selection

In *weighted random selection* individual probability weights
are assigned to each event in a discrete random process. This affords
more precise control over the overall mix of outcomes than would
otherwise be possible using one of the distributions already discussed
in this chapter. Common Music's `odds` function implements a simple
binary form of weighted random selection that is identical to the
`chance?` function from Chapter 7 (except that
values other than true and false can be returned by `odds`). More
complex forms of weighted selection are presented in Chapter 19 and
Chapter 20, but these discussions assume an understanding of
probability tables, which we present here.

### Probability Tables

A probability table associates each outcome in a discrete random
process with a specific portion, or segment, of the total probability
range. The size of each segment reflects the proportional likelihood
of that value being generated from the random process. To generate the
next value from a table, a uniformly distributed random value is
generated and the table is searched to find the probability segment
that contains the random lookup value. The datum associated with that
segment is then returned from the table as the random value.

The implementation defined in Example 18-11 allows us to specify
probabilities in terms of relative *weights* that are
associated with each possible outcome in the random process. The
function that creates the probability table will convert these
relative weights into absolute portions of the total probability
interval 0 to 1. To see how this works, consider a weighted random
process that generates four rhythmic symbols: S, E, Q and H, where S
should be three times as likely as Q, E should be twice as likely as Q
and H should be 1/4 as likely as Q (Table 18-1).

Table 18-1. Example probability table for four rhythmic outcomes.

| index | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| outcome | s | e | q | h |
| weight | 3 | 2 | 1 | 1/4 |
| segment | 0.0-0.48 | 0.48-0.8 | 0.8-0.96 | 0.96-1.0 |
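The segment boundaries in Table 18-1 are just the running sum of the weights divided by their total, which is easy to verify in Python with exact fractions:

```python
from fractions import Fraction

weights = {'s': 3, 'e': 2, 'q': 1, 'h': Fraction(1, 4)}
total = sum(weights.values())  # 6 1/4

bounds, running = {}, 0
for outcome, w in weights.items():
    running += w
    bounds[outcome] = running / total  # upper bound of segment

assert bounds['s'] == Fraction(48, 100)
assert bounds['e'] == Fraction(8, 10)
assert bounds['q'] == Fraction(96, 100)
assert bounds['h'] == 1
```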

Our implementation of probability tables (ptables) in Example 18-11
uses two functions. The first function, `make-ptable`, creates a
ptable from a list of outcomes and weights. The `pran` function then
generates a random outcome from a ptable created by `make-ptable`.

Example 18-11. Implementing probability tables.

```lisp
(define (make-ptable data)
  (let ((total (loop for d in data sum (second d)))
        (sum 0))
    ;; total holds sum of weights in data
    (loop for d in data
          for v = (first d)  ; outcome to return
          for w = (second d) ; relative weight
          do (set! sum (+ sum w))
          ;; collect outcome and normalized probability
          collect (list v (/ sum total)))))

(define (pran table)
  ;; return outcome in table according
  ;; to its weighted probability
  (let ((x (random 1.0))) ; x is uniform number < 1
    (loop for d in table
          for p = (second d)
          when (< x p) ; x in this segment
          return (first d))))
```

A probability table is represented as a list of lists, the first element of each sub-list is an outcome and the second value is its weight relative to the other outcomes defined in the list. For example, the probability table described in Table 1 would be specified as the list:

`((s 3) (e 2) (q 1) (h 1/4))`

The `make-ptable` function creates a probability table from
`data`, a list of values and weights as shown in the example
above. The table returned by `make-ptable` is identical to `data`
except that the relative probability weights in the data description
are converted to equivalent absolute segments along the probability
interval 0-1. The variable `total` is set to the sum of the
relative weights in `data`. The loop then iterates over each
sublist `d` in `data`, setting the variable `v` to the outcome
of `d` and `w` to its weight. The loop then collects lists
containing the same value `v` but with a weight normalized to lie
between 0-1 by calculating the current ratio of `sum` to the
`total` weight. The table returned by the function contains the same
outcomes as `data` but with relative weights converted to
monotonically increasing values that represent the upper bounds of
each segment along the probability range 0-1.

The function `pran` implements weighted random selection given a
probability table. It generates a uniform random number `x` between
0 and 1. Because the larger segments cover proportionally more of the
interval, they are proportionally more likely to contain `x` than the
smaller segments. The loop iterates over each element in the table and
compares `x` to the probability segment value `p` of each segment.
Since segment values are stored in increasing order, the first segment
value `p` that exceeds `x` marks the segment that contains the value
to return.
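For readers who want to experiment outside of Common Music, here is a
hedged Python translation of the same two-step technique. The names
`make_ptable` and `pran` simply mirror the Scheme functions; the
empirical counts illustrate that the weights behave as intended:

```python
import random
import itertools

def make_ptable(data):
    """Convert (outcome, weight) pairs into (outcome, upper-bound)
    pairs along the probability interval 0-1 (like make-ptable)."""
    total = sum(w for _, w in data)
    bounds = itertools.accumulate(w for _, w in data)
    return [(v, b / total) for (v, _), b in zip(data, bounds)]

def pran(table):
    """Weighted random selection: return the outcome of the first
    segment whose upper bound exceeds a uniform random x (like pran)."""
    x = random.random()
    for outcome, p in table:
        if x < p:
            return outcome
    return table[-1][0]  # guard against floating-point round-off

# The table from Table 18-1, then a quick empirical check.
table = make_ptable([('s', 3), ('e', 2), ('q', 1), ('h', 0.25)])
counts = {v: 0 for v, _ in table}
for _ in range(10000):
    counts[pran(table)] += 1
# 's' should occur roughly three times as often as 'q', and 'h'
# about one quarter as often as 'q'.
```

The final `return` is a small safeguard the Scheme version does not
need in practice: it handles the rare case where round-off leaves the
last bound fractionally below 1.0.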

Given our implementation of probability tables we can now define a process to generate values using weighted random selection:

Example 18-12. Weighted random selection of notes and rhythms.

```
(define note-tabl
  (make-ptable '((c 3) (d 1) (ef 2) (f 1)
                 (g 3) (af 1) (bf 1) (c5 3))))

(define rhy-tabl
  (make-ptable '((s 3) (e 2) (q 1) (h 1/4))))

(define (play-pran reps notes rhys tempo)
  (process repeat reps
           for k = (keynum (pran notes))
           for r = (rhythm (pran rhys) tempo)
           output (new midi :time (now)
                       :keynum k
                       :duration (* r 1.5))
           wait r))
```

The probability table defined in `note-tabl` “prefers” the notes of
the C-minor triad within the C-minor scale. The table stored in
`rhy-tabl` contains the example shown in Table 18-1.

Interaction 18-14. Listening to Pran

```
cm> (events (play-pran 50 note-tabl rhy-tabl 100) "ptab.mid")
"ptab-1.mid"
cm>
```

### Random Walks

The term *random walk* describes a random process in which a
value is *incremented* by a random amount. The motion produced
by a random walk is reminiscent of how a drunk staggers around when
walking: with each stagger the location of the drunk changes randomly
within the radius of one step. The random walk is also characteristic
of how molecules and very small particles move (Brownian motion) and
for that reason it is sometimes referred to as *brown noise*.

Because the next value in a random walk must lie within a certain
interval from the current value, the current value has a very strong
influence on the next outcome. The influence of past values on
successive values is called *correlation*. The correlation in a
random walk is characterized by a 1/f^{2} spectrum, which
means that small rates of change are much more prevalent than large
ones. This is very different behavior from the uniform distribution,
or *white noise*, which exhibits large changes just as
frequently as small ones because there is no correlation between
successive values.

A random walk is easy to implement in a musical algorithm. A walking
value `x` is initialized to some appropriate starting value and then
incremented by a random amount (positive or negative) on each iteration
of the process. A moment's reflection will tell us that at some point
an *unbounded* walk will likely exceed the limits of whatever
sound parameter it is controlling. For this reason the random walks
that control parameterized data are typically *bounded*, or
constrained, in some manner so that they always lie within minimum and
maximum boundary values. There are no hard and fast rules on how to
adjust values that wander past boundaries: common adjustments
include resetting the value to the boundary value, reflecting the
value back into bounds, randomly repositioning the value somewhere
within the range, or stopping the process altogether.
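The boundary adjustments just described can be sketched in a few lines
of Python. This is an illustrative stand-in, not Common Music's
`drunk`; the function and mode names are chosen only to echo the text:

```python
import random

def bounded_walk(start, step, low, high, n, mode='reflect'):
    """Generate n values of a random walk constrained to [low, high].
    Boundary handling loosely follows the modes in the text:
    'reflect' folds the excess back into range, 'limit' clamps to
    the boundary, 'jump' repositions randomly within range."""
    x = start
    values = []
    for _ in range(n):
        x += random.uniform(-step, step)
        if x < low or x > high:
            if mode == 'reflect':
                x = low + (low - x) if x < low else high - (x - high)
            elif mode == 'limit':
                x = min(max(x, low), high)
            elif mode == 'jump':
                x = random.uniform(low, high)
        values.append(x)
    return values

# A 20-note walk around middle C, staying between key numbers 40-80.
melody = bounded_walk(60, 3, 40, 80, 20)
```

Note that 'reflect' assumes the step size is small relative to the
range; a single step larger than the whole range could reflect back
out of bounds again.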

Common Music's `drunk` function implements bounded or unbounded
random walks, and its optional keyword arguments constrain values to
lie within specified `:low` and `:high` boundaries. A value that
exceeds either bound is adjusted according to the `:mode`
argument. Possible modes are:

`:reflect :limit :reset :jump :stop`

The default mode is `:reflect`, which “wraps” a value back into
bounds by its current excess amount. Example 18-13 defines a process
that uses `drunk` to generate a random melody.

Example 18-13. Random walks applied to keynum and amplitude.

```
(define (stagger len low high step)
  (let ((k (/ (+ low high) 2))  ; start walk at the midpoint
        (a (between .1 .9)))
    (process repeat len
             for r = (pick 1/8 1/4)
             set k = (drunk k step :low low :high high :mode :jump)
             set a = (drunk a .1 :low .1 :high .9)
             output (new midi :time (now)
                         :duration (* r 1.5)
                         :keynum k
                         :amplitude a)
             wait r)))
```

`K` is a key number controlled by a random walk between `low` and
`high`. On each iteration of the process the key number can deviate
by up to `step` steps from its current position. The variable `a`
is the amplitude of each event, which is allowed to wander between .1
and .9. The `:mode` keyword argument controls the boundary rule of
the walk. The value `:jump` causes a value that is out of bounds to
be reset to a randomly chosen position within bounds.

Interaction 18-15. Listening to the stagger process.

```
cm> (events (stagger 50 40 80 3) "drunk.mid")
"drunk-1.mid"
cm> (events (stagger 50 20 90 7) "drunk.mid")
"drunk-2.mid"
cm>
```

## Example

The final example in this chapter was provided by Michael Klingbeil from some studies he made for an orchestra piece. The example generates a series of exponential gestures, each of which consists of an upward “arc” of microtonal notes. This short example uses many of the functions presented in this chapter as well as the exponential scaling functions from Chapter 16 and the harmonic series from Chapter 15.

Example 18-14. Using randomness in scaling and tuning.

```
(define (arpeggiate-exprhy keynums time rate midpoint-frac
                           amplow amphi legato bass-legato
                           bass-cutoff last-legato)
  (let* ((segs (length keynums))
         (last (1- segs))
         (midpoint (floor (* segs midpoint-frac)))
         ;; deltas below midpoint follow one curve, above another
         (deltas (append (explsegs midpoint
                                   (* midpoint-frac time)
                                   rate)
                         (explsegs (- segs midpoint)
                                   (* (- 1 midpoint-frac) time)
                                   (/ 1 rate)))))
    (process for i from 0
             for k in keynums
             for d in deltas
             for r = (if (< k bass-cutoff)
                         bass-legato
                         (if (= i last)
                             (* last-legato d)
                             (* legato d)))
             for a = (between 0.45 0.5)
             then (drunk a 0.1 :low amplow :high amphi)
             output (new midi :time (now)
                         :keynum k
                         :amplitude a
                         :duration r)
             wait d)))
```

The `arpeggiate-exprhy` process creates both upward and downward
“arcs” of MIDI events from a specified set of key numbers. The
rhythms of each arc are determined by dividing a total time duration
into two segments of exponentially related rhythms. The
`midpoint-frac` is a proportion that determines the duration of each
arc relative to the total `time` specified to the process. This
duration is then “chopped up” into exponentially related time values
that sum to the duration of the arc. The amplitude `a` for each
event is determined by a random walk. The `amplow` and `amphi`
parameters control the overall range of the random
walk. `Bass-legato` sets the duration of notes with a keynum less
than `bass-cutoff`. This allows the bass notes to ring even when the
synthesizer does voice stealing for the middle notes of the
arpeggio. `Last-legato` extends the duration of the final note of an
arpeggio for similar reasons.
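The exponential rhythm division can be illustrated with a small
sketch. Note that this Python `explsegs` is only a hypothetical
stand-in for the Common Music function of the same name from Chapter
16, under the assumption that each successive segment is `rate` times
the previous one and that all segments sum to the total duration:

```python
def explsegs(n, total, rate):
    """Divide total into n segments, each `rate` times the previous,
    so that the segments sum to total. (A sketch of the behavior the
    text assumes for Common Music's explsegs; details may differ.)"""
    if n <= 0:
        return []
    if rate == 1:
        return [total / n] * n
    # Geometric series: first * (1 + rate + ... + rate^(n-1)) = total
    first = total * (1 - rate) / (1 - rate ** n)
    return [first * rate ** i for i in range(n)]

# An 8-note arc over 4 seconds: the first half accelerates
# (rate < 1 shortens each successive rhythm), the second half
# relaxes again with the reciprocal rate.
deltas = explsegs(4, 2.0, 0.5) + explsegs(4, 2.0, 1 / 0.5)
```

Because the second call uses the reciprocal rate, the resulting list
of deltas speeds up toward the midpoint and slows back down, just as
`arpeggiate-exprhy` does with its two `explsegs` calls.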

The definition of `arpeggiate-exprhy` produces a single upward and
downward arpeggiation. We now define another process that schedules a
series of overlapping arpeggiations whose key numbers are determined
by distorting the harmonic series:

Example 18-15. Arpeggiated harmonic distortions.

```
(define (distort-harmonics fund distort)
  (loop for h from 1 below (floor (/ 25.0 distort))
        if (odds (* 0.9 distort))
        collect (keynum (* fund (expt h distort)) :hertz)))

(define (arpa-harmonic note dur gap)
  (process with fund = (hertz note)
           for distort from 0.7 below 1.05 by 0.05
           for notes = (distort-harmonics fund distort)
           sprout (arpeggiate-exprhy notes
                                     (* (vary dur 0.1) distort)
                                     (between 4.0 0.25)
                                     (between 0.3 0.6)
                                     0.3                 ; amplow
                                     0.8                 ; amphi
                                     2.0                 ; legato
                                     (* dur distort 0.7) ; bass-legato
                                     59                  ; bass-cutoff
                                     1.0)                ; last-legato
           wait (vary gap 0.4)))
```

`Arpa-harmonic` spawns overlapping arpeggios with a maximum mean
duration of `dur` and a mean gap of `gap` seconds between arpeggio
starts. Each arpeggio is composed of frequencies of a distorted
harmonic series. The distortion factor varies from 0.7 (compressed
spectrum) to 1.0 (harmonic spectrum) by increments of 0.05. The
greater the compression, the lower the odds of selecting an element
of the spectrum. When the distortion reaches 1.0 the spectrum is
harmonic and on average 90% of the elements will be selected.
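The spectral distortion itself is simple arithmetic: partial `h` of a
fundamental is shifted from `fund * h` to `fund * h^distort`. Here is
a hedged Python sketch of that idea; `hertz_to_keynum` uses the
standard MIDI formula (69 + 12·log2(f/440)) in place of Common Music's
`keynum`, and a plain Bernoulli trial stands in for `odds`:

```python
import math
import random

def hertz_to_keynum(freq):
    """Convert a frequency in Hz to a (fractional) MIDI key number."""
    return 69 + 12 * math.log2(freq / 440.0)

def distort_harmonics(fund, distort):
    """Collect key numbers of a distorted harmonic series over fund
    (in Hz), keeping each partial with probability 0.9 * distort.
    A Python sketch of the chapter's distort-harmonics."""
    keynums = []
    for h in range(1, math.floor(25.0 / distort)):
        if random.random() < 0.9 * distort:
            keynums.append(hertz_to_keynum(fund * h ** distort))
    return keynums

# A compressed spectrum (distort < 1) over G1 (about 49 Hz):
notes = distort_harmonics(49.0, 0.7)
```

With `distort` below 1 the partials crowd together (a compressed
spectrum); at exactly 1 the expression reduces to the ordinary
harmonic series.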

Interaction 18-16. Arpeggiated harmonic distortions.

```
cm> (events (arpa-harmonic 'g1 7.0 5.0) "arp.mid"
            :channel-tuning ct9)
"arp-1.mid"
cm> (events (arpa-harmonic 'g1 6.0 5.0) "arp.mid"
            :channel-tuning ct9)
"arp-2.mid"
cm>
```

## Chapter Source Code

The source code to all of the examples and interactions in this chapter can be found in the file chance.cm located in the same directory as the HTML file for this chapter. The source file can be edited in a text editor or evaluated inside the Common Music application.
