
Two Cultures of Film Music: Leitmotif and sound design

07/10/2014

Composing music and sound design for films poses specific challenges. The raison d’être for any music and sound design, in fact for the soundtrack as a whole, is the narrative and the image track of the film. This imposes constraints on the time-based art of music. Film music can never follow its own logic, freely developing themes or sound textures for as long as it takes. It is limited by the duration of the scene it supports or comments on.

In my analyses of films I have observed that film composers adopt two fundamental musical approaches: on the one hand, a thematic concept of music using Leitmotifs and harmonic tonality; on the other, a timbral or spectral aesthetic which expresses itself through complex textures and drones that blend seamlessly with environmental sounds and the dialogue. The latter is closely linked to sound design, which emerged from the electroacoustic music tradition and the 20th Century aesthetic of musicalising environmental, indeed any recorded, sound or noise. I argue that leitmotivic sound design is a new, original creative tool specific to film. Film music, by contrast, is mostly extraneous, i.e. non-diegetic to the soundtrack, and derived from the model of classical-romantic music, i.e. pastiche.

Examples

Films with leitmotivic sound design

Terminator 2 (James Cameron, 1991)

The Testament of Dr Mabuse (Fritz Lang, 1933)

The Hurt Locker (Kathryn Bigelow, 2008)

Katalin Varga (Peter Strickland, 2009)

The Woman in the Dunes (Suna no Onna, Hiroshi Teshigahara, 1964)

 

Films with a leitmotivic music score

Jaws (Steven Spielberg, 1975)

Halloween (John Carpenter, 1978)

Gone with the Wind (Victor Fleming, 1939)

King Kong (Cooper & Schoedsack, 1933)

The Third Man (Carol Reed, 1949)


The Rebirth of Music from the Spirit of Drama?

10/05/2014

Contemporary art music and audio-visual composing

Jean Martin, March 2014

Keywords: audio-visual composing, spectral music, sound recording, emancipation of noise/sound in the 20th Century


Sunset Boulevard (1950, Billy Wilder) opening scene

I wanted to use this image as a metaphor – not for the death of music as we have known it, but for the state of floating transition it is in.

Much of the music composed throughout history related to specific activities in the world: it supported religious practice and ceremony, dance, dramatic opera, funerals, military and state events. Only occasionally was music purely self-referential, as in the scholasticism of Renaissance music or the experiments of New Music in the 20th Century.

The concept of the genius (mostly male) composer, of whom Stockhausen was arguably the last, is on its way out. A student recently called Stockhausen a dictator. This is obviously an exaggeration, but there is a core of truth to it. Do we want the meaning of sounds dictated to us while we sit passively in a concert hall chair? Film is equally a medium that expects the listening viewer to suspend his disbelief (Coleridge, 1817) and submit to a passive state of perception. We are locked into a chair and a timeline for roughly two hours.

The word “composer” is now often replaced by “artist”, even for creators who work mostly with sound. This indicates that organisers of sound use other forms of expression as well: still and moving image projection, sound diffusion, technology in general and live performance. Ironically, the more virtual the distribution of music via the internet has become, the more live performance has gained in importance. Live performance means that the visual mise-en-scène has to be carefully considered as well. If you want to get noticed, you have to perform live. This is why both Philip Glass and Steve Reich formed their own ensembles at the beginning of their careers, dedicated solely to performing their own music.

Thinking about the audience, the listeners, reminds us that music is a social activity. The focus in the musical meaning-making process has shifted towards the listener. The composer herself is a listener in the first place. Now the listener can become a composer. Audio technology has made this possible: the fact that any sound can be recorded means that this recorded sound, this sound object, has the potential to become the starting point for a new composition. Samplers, digital tools which can record and manipulate any sound, made this easier still. DJs adopted this approach and created new genres: turntablism, hip-hop and DJ culture.

Technology

Technology played an important part in shaping the music of the 20th Century; in fact, its decisive contribution was the electrification of music. It started with radio in the 1920s, a powerful new medium to influence the masses and to bring music to many more listeners. Today we consume music mostly through loudspeakers. What does this mean as a phenomenon? Phenomenologically speaking, it homogenises what we hear: dripping water, birdsong or a string quartet are all generated by the vibrations of loudspeaker membranes, which in turn set the air molecules into vibration. We will never get away from the fact that we listen through loudspeakers, however perfect the reproduction might be. Technology unifies experiences into something repeatable, similar.

Artists question practices that are taken for granted and use reproduction technologies in unconventional ways. Walter Ruttmann had the idea of using celluloid film for editing sound in his piece Weekend in 1930, only three years after the invention of film sound and long before sound editing was possible in any other way. One could record and mix on shellac records, but one could not edit, i.e. cut, records. Cage used a turntable as an instrument in his piece Imaginary Landscape No. 1 (1939). From the late 1970s onwards, and especially in the mid-1990s, DJs used turntables for scratching and mixing, an analogue, gestural, embodied variation of sampling culture that became known as turntablism. Steve Reich had another take on this: in 1966 he was intrigued by a recording of a black man who talked about having been beaten by police during a demonstration against racism. Reich looped a small segment of this recording containing the words “come out to show them”. He then put two identical loops on two different analogue tape machines and started them at the same time. Slowly they move out of phase. In the process the meaning of the words gets blurred and the phrase turns into pure rhythmic sound material. The result is Come Out (1966) by Steve Reich. Minimalism, or, to avoid the -ism, process music, was born. His friend Terry Riley had organised a similar, but different, process in his famous piece In C a few years earlier, in 1964.
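
Reich’s tape process is simple enough to sketch in code. The following Python/NumPy fragment is only a minimal illustration of the principle, not a reconstruction of Come Out: an enveloped noise burst stands in for the sampled voice, and one of two identical loops is resampled to play fractionally slower, so that the two voices drift slowly out of phase.

```python
import numpy as np

SR = 44100  # sample rate in Hz

# Placeholder material: an enveloped noise burst stands in for the
# sampled spoken phrase "come out to show them".
rng = np.random.default_rng(0)
n = int(0.8 * SR)
loop = rng.standard_normal(n) * np.hanning(n)

def render_phasing(loop, repeats=40, drift=0.002):
    """Mix two copies of the same loop, one played back fractionally
    slower, so that the two voices gradually drift out of phase."""
    voice_a = np.tile(loop, repeats)
    # Stretch the second copy by the drift factor, i.e. play it slightly slower.
    slow_len = int(len(loop) * (1 + drift))
    positions = np.linspace(0, len(loop) - 1, slow_len)
    slow_loop = np.interp(positions, np.arange(len(loop)), loop)
    voice_b = np.tile(slow_loop, repeats)[: len(voice_a)]
    return 0.5 * (voice_a + voice_b)

mix = render_phasing(loop)
print(f"rendered {len(mix) / SR:.1f} s of phasing material")
```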

Experiments and Outsiders

The idea of using an experiment, a scientific method, in music is astonishing. Wikipedia defines an experiment as “an orderly procedure carried out with the goal of verifying, refuting, or establishing the validity of a hypothesis.” [4] Apart from exploring aspects of cause and effect, a central feature of an experiment is its repeatability. This intrusion of science into art tells us something about the status of science and technology in modern civilisations and their culture. Many composers have enthusiastically embraced the use of science in music: Trevor Wishart, Jonathan Harvey, Pierre Boulez, Tristan Murail and countless more.

A theme seems to be emerging here: outsiders have shaped the music of the 20th Century. Luigi Russolo, an artist belonging to the Italian Futurist movement, declared in his 1916 manifesto “The Art of Noises” that from then on noises would be emancipated and placed at the centre of composition. Ionisation, a piece for 13 percussionists composed in 1931 by Edgard Varèse, put this new aesthetic into practice. Varèse coined the term organised sound after being repeatedly attacked with the claim that his compositions were not music.

Pierre Schaeffer, a sound engineer working for French national radio in Paris in the 1940s, had the idea of creating a new kind of music using purely recorded, concrete sound instead of abstract note symbols which then had to be interpreted and played by musicians. In 1948 he coined the term Musique concrète. This later evolved into electroacoustic music, merging acoustic recording and electronic manipulation techniques. In an unexpected twist, the techniques of this rather obscure practice have gained great importance in the creation of complex soundtracks for contemporary films reaching millions of people. This craft is called sound design.

Helmut Lachenmann, a contemporary German composer, gave this aesthetic yet another twist. Lachenmann creates what he refers to as musique concrète instrumentale: revealing the sonic side effects of playing an instrument (scratching, breathing, hitting etc.) and putting these noisy, external, “non-musical” sounds at the centre of musical exploration.

Another outsider is Iannis Xenakis, an architect by training and a self-taught composer. As a collaborator of the famous architect Le Corbusier he created the Philips Pavilion for the 1958 World Exposition in Brussels. The two disciplines fed into one another: the graphic score of his orchestral piece Metastasis (1954), with its massed string glissandi drawn as straight lines that form curved surfaces, anticipated the hyperbolic shells of the pavilion’s architecture.

Xenakis: Metastasis score (1954)

Technology and the electrification of music had other unforeseen consequences: they created or inspired whole new genres. Blues used to be a very local affair on the cotton farms of the American Deep South, basic improvised music that kept the workers going through hard labour. The fact that some of these musicians were recorded meant that many more people could listen to their music, in particular aspiring musicians who wanted to learn from the masters. One can argue that audio recording enabled and facilitated the emergence of new musical styles such as Blues, Jazz, Rock and Pop, because it could capture the fine nuances of improvised music which could not otherwise be notated.

Philips Pavilion, Brussels Expo 1958

Towards audio-visual composing

In the 1970s Boulez had the idea of creating an institute that would firmly link science and the art of music. IRCAM was founded in 1977 and still exists today. Among its fields of exploration are analysis and synthesis, musical perception and cognition, and the acoustics of instruments and rooms. This research inspired a compositional approach called spectral music: Jean-Claude Risset, Jonathan Harvey and in particular Tristan Murail analysed acoustic sound materials and used the analytical data to extract pitches from recorded sound and create scores to be performed by conventional ensembles. There is a connection here to sound design and its use in films. Combining images and tone colours is a common practice in contemporary film. Sound colour, itself a metaphor, can create mood. A great example of this aesthetic approach is Toru Takemitsu’s sound organisation for the Japanese film The Woman in the Dunes (1964, Hiroshi Teshigahara), where his textural music blends perfectly with environmental wind sounds to create an ominous atmosphere.
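
The analytical step behind spectral composition can be sketched very compactly. The Python/NumPy fragment below is an illustrative sketch only, not Murail’s actual workflow: it takes a recorded (here synthesised) sound, picks the strongest peaks of its spectrum and rounds them to the nearest equal-tempered pitches, the kind of raw material a spectral composer might then orchestrate.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def prominent_pitches(signal, sr=SR, n_peaks=8):
    """Return the strongest spectral peaks of a sound, rounded to the
    nearest equal-tempered MIDI pitches. A crude stand-in for the
    analysis stage behind spectral composition."""
    window = np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    spectrum[freqs < 20.0] = 0.0           # discard sub-audio rumble
    strongest = np.argsort(spectrum)[-n_peaks:]
    midi = 69 + 12 * np.log2(freqs[strongest] / 440.0)
    return sorted(set(int(round(m)) for m in midi))  # one entry per semitone

# A synthetic "environmental" sound: a few inharmonic partials plus noise.
t = np.arange(SR) / SR
partials = [110.0, 287.0, 530.0, 941.0]
sound = sum(np.sin(2 * np.pi * f * t) / (i + 1) for i, f in enumerate(partials))
sound = sound + 0.05 * np.random.default_rng(1).standard_normal(len(t))
print(prominent_pitches(sound))  # pitches clustered around the partials
```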

More research will be dedicated to these synaesthetic phenomena in the future. Neuroscientists are beginning to analyse brain functions during audio-visual perception, although this research still operates far below any aesthetically meaningful level.

Programmers, and artists who are able to use code, can now create any work that can be described or formulated precisely in symbolic terms, i.e. in code: new instruments, whole compositions, sounds and images composed simultaneously, or interactive environments which use sensors to detect movement, distance, density, basically any form of data, which can then be mapped onto any other data. A software instrument can encapsulate a whole composition, and each performance is just one possible rendition of the compositional idea. Scientific and mathematical ideas such as chance, indeterminacy, stochastic processes, Fibonacci series and spectral analysis have inspired composers.
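
At its core, this mapping idea is very small. The sketch below, plain Python with a made-up distance sensor as input (the sensor, its range and the target parameters are purely illustrative assumptions), shows the basic move: normalise an incoming data stream and rescale it onto whatever sonic or visual parameter the composer chooses. Everything else, smoothing, non-linear curves, many-to-many mappings, is a refinement of this gesture.

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a value from one range onto another: the basic
    gesture behind most data-to-sound (or data-to-image) mappings."""
    norm = (value - in_lo) / (in_hi - in_lo)
    return out_lo + norm * (out_hi - out_lo)

# Hypothetical sensor reading: a distance in centimetres between 0 and 200,
# mapped simultaneously onto a filter cutoff (in Hz) and an image brightness.
distance_cm = 75.0
cutoff_hz = scale(distance_cm, 0, 200, 200, 8000)
brightness = scale(distance_cm, 0, 200, 0, 255)
print(round(cutoff_hz), round(brightness))  # 3125 96
```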

Sound has become of great importance for practitioners in many artistic, scientific and technical disciplines: architects, artists, sculptors, scientists using sound for data sonification, film sound designers, interactive game designers and more. Sound has become a linking art.

Technology will continue to have a huge influence on cultural creations, including music. The computer, the universal machine, seems to reunite at least two of our senses, reflecting the way we naturally perceive the world simultaneously through our eyes, our ears and the other senses. The division of the arts into specialised fields of listening, seeing, touching and so on seems outdated and no longer necessary. But this poses huge challenges for creators. Truly universal artists with interests in various artistic, scientific and technological fields are required to create new works that appeal to a broad audience. Yet it is unlikely that a composer, writer and dramatist like Richard Wagner will emerge again. Instead, collaborations between imaginative people will become more important.

Here I would like to come back to my starting point and my special interest in film sound and the complex interaction between sound/music and moving images. My question is: can collaborative work in the context of film inspire new ways of composing and possibly reach greater audiences?

Experimentation in music has been, and still is, an important aspect of composing and developing new ideas. The drawback of experimental music, if one can subsume all the different activities under one term, is that it usually attracts only very small audiences. This is not surprising, because much experimental music remains at an exploratory stage, investigating materials and processes. But ultimately any art form needs a real audience to thrive: a good mixture of ordinary people from various backgrounds. Art and music are an extended discourse which invites dialogue and responses from the audience. They are an expression of thinking about life and how we experience the world by means other than language.

Composing music and sound design for films poses specific challenges. The raison d’être for any music and sound design, in fact for the soundtrack as a whole, is the narrative and the image track of the film. This imposes constraints on the time-based art of music. Film music can never follow its own logic, freely developing themes or sound textures for as long as it takes. It is limited by the duration of the scene it supports or comments on. Yet constraints can be the best source of inspiration and creativity, as one can beautifully observe in Lars von Trier’s collaborative film The Five Obstructions (2003, with Jørgen Leth).

In my analyses of films I have observed that film composers adopt two fundamental musical approaches: on the one hand, a thematic concept of music using Leitmotifs and harmonic tonality; on the other, a timbral or spectral aesthetic which expresses itself through complex textures and drones. The latter is closely linked to sound design, which emerged from the electro-acoustic music tradition and the 20th Century aesthetic of musicalising environmental, indeed any recorded, sound or noise.

Most music in commercial film is pastiche, recycling existing aesthetic approaches and techniques. But digital technology has enabled new ways of audio-visual composing. For example, audio-visual sampling is frequently used to express altered states of mind: drug-induced hallucinations, psychosis, dreams, the overstretching of the nervous system in war and so on. Michel Gondry and Chris Cunningham have playfully and artistically pushed the boundaries of audio-visual expression. Darren Aronofsky in Requiem for a Dream (2000) and Lars von Trier in Dancer in the Dark (2000), with Björk composing the music and playing the lead role, have integrated these experimental techniques into mainstream film culture. Some film sound theoreticians have argued that sound design is the new score.[5] The related practices of field recording, acoustic ecology and soundscape composition indicate a new interest, both commercial and artistic, in exploring and working with environmental sound in highly differentiated ways beyond the purely musical. This aesthetic approach acknowledges the emotional and evocative power of sound. Much research has been done on how we actually listen to and perceive sound.

New audio-visual experiments are happening outside the mainstream film industry. Music videos in particular used to be an area of immense creativity. Current computational practice has expanded this. With environments like Processing, SuperCollider, Max/MSP and Pure Data (Pd), truly audio-visual composing has become possible, at least on a technical level. What is still missing is the use of these new techniques to address themes and questions of general human interest: the temporality and precariousness of existence, power, inequality, love, trust, struggle, death and so on. On this level Richard Wagner is still dimensions ahead.
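
To make “possible, at least on a technical level” concrete, here is one minimal illustration in plain Python and NumPy, deliberately not tied to any of the environments named above and intended as a toy sketch rather than a production technique: an audio signal is reduced to one loudness value per video frame, which a Processing sketch or a shader could then use to modulate the brightness, scale or density of generated images.

```python
import numpy as np

SR, FPS = 44100, 25   # audio sample rate and video frame rate

def brightness_track(audio, sr=SR, fps=FPS):
    """Reduce an audio signal to one loudness (RMS) value per video frame,
    scaled to 0-255, ready to drive the brightness of generated images."""
    hop = sr // fps
    n_frames = len(audio) // hop
    frames = audio[: n_frames * hop].reshape(n_frames, hop)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return np.round(255 * rms / rms.max()).astype(int)

# Placeholder material: a two-second sine tone swelling from silence.
t = np.arange(2 * SR) / SR
tone = np.sin(2 * np.pi * 220.0 * t) * np.linspace(0.0, 1.0, len(t))
print(brightness_track(tone)[:10])  # brightness values for the first frames
```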

In his late film Playtime (1967) Jacques Tati plays with seemingly trivial, everyday situations of modern life which, on closer looking and careful listening, create dissonances in our audio-visual perception. These dissonances have comic effects. Through his unusual interpretation of modern life Tati creates an awareness of its absurdities, revealing its ideology: the blind belief in progress and in constant movement that is not going anywhere (the roundabout as a metaphor for a merry-go-round going nowhere).

[1] This process of disintegration, or, put positively, expansion of tonality started much earlier. A crucial moment was Wagner’s Tristan und Isolde (1859).

[2] Cage: Experimental Music (1958: 10): “…one may give up the desire to control sound, clear his mind of music, and set about discovering means to let sounds be themselves rather than vehicles for man-made theories or expressions of human sentiments.”

[3] Cage: The Future of Music: Credo, 1968 p. 5

[4] http://en.wikipedia.org/wiki/Experiment (viewed Mar 2014)

[5] Kulezic-Wilson, D. (2008). “Sound Design is the New Score.” Music and the Moving Image 2(2).