Introduction to Time-Domain Processing
SOUNDFILES
The CDP 'Groucho' suite of functions operates in the Time Domain, processing sound files (.wav, .aiff etc.). A wide variety of soundfile sample types and formats are supported (see Filetypes). By default, the output soundfile has the same format as the input, unless you specify otherwise, for example to create a multi-channel file from a mono input. Soundfile conversion in CDP is handled by COPYSFX.
Sound editors graph the waveform of a sound file as instantaneous amplitude (loudness) against time.
The sound can be manipulated by altering those two parameters: amplitude and time. When amplitude is averaged over a longer time-window, it reveals the changing amplitude envelope of the sound. The CDP ENVEL and ENVNU function groups have just about every conceivable way of reshaping the envelope or imposing one on a sound, as well as fading and trimming functions which can affect the sound's attack and decay characteristics.
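The idea of averaging amplitude over a longer time-window can be sketched in a few lines of Python. This is illustrative only (the helper name rms_envelope is an assumption, not a CDP program): it computes a root-mean-square average per window, which is one common way to recover an envelope.

```python
import math

def rms_envelope(samples, window):
    """Average amplitude (RMS) over successive windows of `window` samples."""
    env = []
    for start in range(0, len(samples), window):
        block = samples[start:start + window]
        env.append(math.sqrt(sum(s * s for s in block) / len(block)))
    return env

# A 440 Hz tone that fades out linearly: the envelope falls with it.
n = 1000
tone = [(1.0 - i / n) * math.sin(2 * math.pi * 440 * i / 44100) for i in range(n)]
env = rms_envelope(tone, 250)  # four envelope points, falling towards zero
```

The window length is the trade-off: too short and the envelope still wobbles with the waveform itself; too long and fast attacks are smeared.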
Transposition in the time domain (see MODIFY SPEED and STRANS) makes sounds faster (and therefore shorter) when the pitch level is raised, slower (and therefore longer) when lowered. Transposed sounds may also be stacked into chords, or the pitch may be distorted by modulation, especially Ring Modulation.
CUT, PASTE and MIX
The humble editing of sounds by cutting and splicing is in fact a very powerful technique, especially when combined with mixing. The SFEDIT function group has many ways of partitioning sounds, including those that MASK portions with silence or switch between sounds (TWIXT & SPHINX). The RETIME program shifts the time of events within a soundfile, by reference to amplitude peaks or events separated by silence, and can synchronise particular events in one soundfile with particular events in another. Several functions cut the sound into multiple output files, permitting re-assembly in a different fashion, or further processing before being re-mixed (for example PARTITION and ISOLATE).
On the pasting and mixing side, various sequencing functions (ranging from JOIN to SEQUENCE2) allow a number of different sounds to be played as events in a rhythmic pattern. For general mixing, the SUBMIX functions have a variety of approaches, the most flexible being MIX, which uses a text mixfile. It has a multichannel equivalent, NEWMIX, supporting up to 16 channels.
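Conceptually, each line of a mixfile names a source sound, a start time and a level; the mixing itself is just scaled, offset addition. The sketch below is illustrative only; the mix function and its event format are assumptions, not CDP's mixfile syntax.

```python
def mix(events, sr=44100):
    """Mix (samples, start_time_seconds, gain) events into a single output."""
    length = max(int(t * sr) + len(s) for s, t, g in events)
    out = [0.0] * length
    for s, t, g in events:
        offset = int(t * sr)
        for i, v in enumerate(s):
            out[offset + i] += v * g
    return out

# Two overlapping events, at sample rate 1 for easy inspection:
result = mix([([1.0] * 4, 0, 1.0), ([1.0] * 4, 2, 0.5)], sr=1)
```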
An important aspect of mixing is spatialisation. Panning is most easily achieved by fixed spatial positioning in a mixfile, or time-variable movement via PAN (MODIFY SPACE 1). CDP has also developed two substantial groups of functions for manipulating sounds in multi-channel space: see the MULTICHANNEL group, especially MCHANPAN, and the MULTI-CHANNEL TOOLKIT, which supports ambisonic and WAVE_EX file formats. (Note that "true" stereophonic sound and, by extension, multi-channel "surround" sound, depend on very subtle timing and phase differences between the component signals, which are hard to simulate; it may be better to work with spatially recorded sources.)
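Fixed-position panning is usually done with an equal-power law, so that perceived loudness stays roughly constant as a sound moves across the image. A minimal sketch (the pan helper here is hypothetical, unrelated to CDP's PAN program):

```python
import math

def pan(sample, position):
    """Equal-power pan: position 0.0 = hard left, 1.0 = hard right."""
    angle = position * math.pi / 2
    return sample * math.cos(angle), sample * math.sin(angle)

left, right = pan(1.0, 0.5)  # centre: both channels at about 0.707
```

As the note above says, amplitude panning of this kind positions a mono source but cannot reproduce the timing and phase cues of a genuinely stereophonic recording.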
SEGMENTS and GRAINS
Segmentation extends the idea of cutting and pasting and is an important feature of many CDP time-domain programs. In the EXTEND group, sounds are repeated (e.g. ITERATE) or the file is read in unusual ways (e.g. ZIGZAG). LOOP repeats segments while stepping through the soundfile, a concept developed more fully in BRASSAGE (as well as the multi-sound SAUSAGE and the multi-channel WRAPPAGE). Here the segments, treated now as grains of sound, can be heavily layered in density to create an evolving texture and stretch or shrink the sound's timebase. Grainmill is a separate graphic program dealing specifically with BRASSAGE.
Brassage (Mode 5) can also be used to granulate a sound by creating gaps in it. The GRAIN functions manipulate grains in a "grainy" sound by transposing, shuffling, repositioning, reversing, duplicating them etc.
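The core of brassage-style granulation is reading short, enveloped grains from one position in the source while writing them at a different rate, so density and timebase can be controlled independently. A bare-bones time-stretch sketch follows; all names and parameters are illustrative assumptions, not BRASSAGE's actual ones.

```python
import math

def granulate(src, grain_len, hop, stretch, out_len):
    """Layer Hann-windowed grains into the output; the read position
    advances `stretch` times more slowly than the write position."""
    out = [0.0] * out_len
    read, write = 0.0, 0
    while write + grain_len <= out_len and int(read) + grain_len <= len(src):
        for i in range(grain_len):
            env = 0.5 - 0.5 * math.cos(2 * math.pi * i / grain_len)  # Hann window
            out[write + i] += src[int(read) + i] * env
        write += hop
        read += hop / stretch
    return out

# Stretch a constant signal's timebase by a factor of 2.
stretched = granulate([1.0] * 1000, 100, 50, 2.0, 400)
```

Making `hop` smaller than the grain length increases the density of overlapping grains; widening the gap instead produces the "gaps" style of granulation mentioned above.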
The IRCAM program CHANT, which synthesises the singing voice (example), repeats small enveloped grains called FOFs at a given density to create resonance. By contrast, CDP's PSOW functions try to find FOF-like pitch-synchronous grains in (ideally) vocal sounds and then manipulate them.
The TEXTURE programs repeat the input sound(s), in whole or in part, to create a texture. Each sound is treated as a 'note-event' (in the Texture workshop examples the input sound is typically a single note to reveal the treatment) which may be repeated at a fixed or varying time-interval, or in a defined rhythm, or in groups of events; transposed randomly within a given range or restricted to a defined pitch set; decorated like a musical ornament (DECORATED and ORNATE), or formed into fully-sequenced motifs. The repetitions may be scattered across stereo or multi-channel space, or spatialised in a more controlled manner. Almost all parameters are time-variable, allowing the texture to evolve.
The wide-ranging musical possibilities of the Texture set are summarised here, though beginners should start with TEXTURE SIMPLE. The Release 7 function PACKET, which extracts small enveloped wave-packets from a soundfile, has distinct potential in creating suitable input sounds for Texture programs.
ECHO and REVERB
The principle of repetition also applies to the echoing delay line and to reverberation. MODIFY REVECHO and NEWDELAY implement the former, while the REVERB group has programs to simulate classic reverberation techniques for larger and smaller spaces, plus a tapped delay line (TAPDELAY).
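A feedback delay line, the basis of both echo and the tapped delays mentioned above, can be written in a few lines. This is a sketch of the principle, not the REVECHO implementation.

```python
def echo(samples, delay, feedback, tail):
    """Feedback delay line: each repeat is the previous one scaled by `feedback`."""
    out = list(samples) + [0.0] * tail
    for i in range(delay, len(out)):
        out[i] += out[i - delay] * feedback
    return out

# A single click produces a train of geometrically decaying echoes.
clicks = echo([1.0], 10, 0.5, 30)
```

Keeping `feedback` below 1.0 guarantees the echoes die away; a tapped delay generalises this by summing several read points at different offsets.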
Convolution is an important reverberation technique, using a sampled impulse response of a building or other responsive space. Many suitable impulse response soundfiles are available on the internet. FASTCONV implements convolution via the Fast Fourier Transform (FFT); experimentation with "ordinary" sounds (not impulse response files) is also a possibility.
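Convolution itself is simple to state: every input sample triggers a copy of the impulse response, scaled by that sample and summed into the output. The direct form below costs O(N×M) operations, which is why FASTCONV works via the FFT instead; the sketch is illustrative, not FASTCONV's code.

```python
def convolve(signal, impulse):
    """Direct convolution: sum scaled, time-shifted copies of the impulse response."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out
```

Convolving with a sampled room response reverberates the input; convolving two "ordinary" sounds cross-colours one with the spectrum of the other.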
Filtering is a standard time-domain technique for colouring sound, reducing certain frequency bands while boosting others to create resonance. The FILTER set has all the classic types of IIR filter, with further possibilities in the Spectral Domain (HILITE FILTER); FASTCONV can also be used as an FIR filter.
Especially useful are the filter bank programs, particularly VARIBANK, in which a text file controls a set of filters, tuned to a specific set of pitches, which are time-variable. This allows each frequency in the set to resonate to the extent that there is energy in that frequency band, and provides a means of harmonising unpitched material or reinforcing particular pitches (c.f. TUNEVARY in the spectral domain).
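The resonant behaviour described above comes from IIR filters with poles just inside the unit circle. A two-pole resonator, and a bank of them tuned to a pitch set, might be sketched like this (illustrative assumptions throughout; VARIBANK's actual filters and parameters differ):

```python
import math

def resonator(samples, freq, sr, r=0.99):
    """Two-pole resonator: rings at `freq`; r closer to 1 narrows the band."""
    theta = 2 * math.pi * freq / sr
    b1, b2 = 2 * r * math.cos(theta), -r * r
    y1 = y2 = 0.0
    out = []
    for x in samples:
        y = x + b1 * y1 + b2 * y2  # feed back the two previous outputs
        out.append(y)
        y1, y2 = y, y1
    return out

def filter_bank(samples, freqs, sr):
    """Sum several resonators tuned to a pitch set, VARIBANK-style."""
    outs = [resonator(samples, f, sr) for f in freqs]
    return [sum(vals) for vals in zip(*outs)]

# An impulse through a two-pitch bank rings at both frequencies.
ringing = filter_bank([1.0] + [0.0] * 99, [220.0, 440.0], 44100)
```

Each resonator only rings to the extent that the input has energy near its tuned frequency, which is exactly what makes such a bank useful for harmonising unpitched material.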
DISTORTION
The DISTORT function group operates on (groups of) wavesets (pseudo-wavecycles): a waveset is the set of samples between two successive zero crossings, and is not necessarily periodic. The output is highly dependent on the nature of the input, the number of cycles in the group and the DISTORT process chosen, ranging from highly distorted to relatively stable.
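Waveset segmentation can be sketched directly: scan for sign changes and cut the sample stream there. Once segmented, DISTORT-style transformations amount to reordering, repeating or reshaping the segments. The names below are illustrative, not CDP code.

```python
def wavesets(samples):
    """Split a signal into wavesets: runs of samples between successive
    zero crossings (sign changes)."""
    segs, start = [], 0
    for i in range(1, len(samples)):
        if samples[i - 1] < 0.0 <= samples[i] or samples[i - 1] >= 0.0 > samples[i]:
            segs.append(samples[start:i])
            start = i
    segs.append(samples[start:])
    return segs

# One simple waveset distortion: reverse each waveset in place and rejoin.
sig = [1.0, 2.0, -1.0, -2.0, 3.0]
distorted = [s for seg in wavesets(sig) for s in reversed(seg)]
```

On a noisy source the "wavesets" bear little relation to true wavecycles, which is precisely why the results range from stable to heavily distorted.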
SYNTHESIS
Before Release 7, CDP's SYNTH group was confined to a limited number of functions, such as generating simple test tones or basic waveforms. The advent of NEWSYNTH, an additive synthesis program, changes this dramatically, though the textfile which defines the waveform would be laborious to construct without the aid of a dedicated graphic interface. In the same vein, the Release 7 package (Feb. 2014) included some very recent and undocumented functions which suggest that this group will become very significant in the near future.
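Additive synthesis in its simplest form is just a sum of sine partials, each with its own frequency and amplitude. The static sketch below is an illustrative assumption, not NEWSYNTH's method (NEWSYNTH's textfile additionally gives each partial time-varying control).

```python
import math

def additive(partials, dur, sr=44100):
    """Sum sine partials given as (frequency_hz, amplitude) pairs."""
    n = int(dur * sr)
    return [sum(a * math.sin(2 * math.pi * f * i / sr) for f, a in partials)
            for i in range(n)]

# A tenth of a second of a 220 Hz tone with two overtones.
tone = additive([(220.0, 1.0), (440.0, 0.5), (660.0, 0.25)], 0.1)
```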
What has Groucho to do with the Time-Domain? Well, the key to developing the Spectral-Domain programs was the Phase Vocoder program (PVOC) developed by CARL. As the Time-Domain is the other side of the coin, so to speak, its programs were named "Groucho" at the very beginning of CDP.