Two Cultures of Film Music: Leitmotif and sound design


Composing music and sound design for films poses specific challenges. The raison d’être of any music and sound design, indeed of the soundtrack as a whole, is the narrative and the image track of the film. This imposes constraints on the time-based art of music. Film music can never freely follow its own logic, developing themes or sound textures for as long as it takes; it is limited by the duration of the scene it supports or comments on.

In my analyses of films I have observed that film composers adopt two fundamental musical approaches. On the one hand there is the thematic concept of music, using Leitmotifs and harmonic tonality. On the other hand composers practise a timbral or spectral aesthetic, which expresses itself through complex textures and drones that blend seamlessly with environmental sounds and the dialogue. This aesthetic is closely linked to sound design, which emerged from the electroacoustic music tradition and the 20th Century project of musicalising environmental, indeed any recorded, sound or noise. I argue that leitmotivic sound design is a new, original creative tool specific to film. Film music, by contrast, is mostly extraneous, i.e. non-diegetic, to the soundtrack and derived from the model of classical-romantic music, i.e. pastiche.


Films with leitmotivic sound design

Terminator 2 (James Cameron, 1991)

The Testament of Dr Mabuse (Fritz Lang, 1933)

The Hurt Locker (Kathryn Bigelow, 2008)

Katalin Varga (Peter Strickland, 2009)

The Woman in the Dunes (Suna no Onna, Hiroshi Teshigahara, 1964)


Films with a leitmotivic music score

Jaws (Steven Spielberg, 1975)

Halloween (John Carpenter, 1978)

Gone with the Wind (Victor Fleming, 1939)

King Kong (Cooper & Schoedsack, 1933)

The Third Man (Carol Reed, 1949)


Electronic sound and Sci-Fi


There is a strange attraction between the Sci-Fi genre and electronic music and sound, as well as 20th Century avant-garde music. The most famous example is Forbidden Planet (1956), the first film with a completely electronic soundtrack, apart from speech and ambient sound.

Synthesisers like the EMS VCS3 (made by Peter Zinovieff’s company Electronic Music Studios) shaped the sound world of the films below:

  • Quatermass and the Pit (1958-59, BBC TV series; music by Tristram Cary), remade as the feature film Five Million Years to Earth (1967)
  • Doctor Who (1963-89, 2005 – present, BBC TV series)


  • Hayward, Philip. Off the Planet: Music, Sound and Science Fiction Cinema. John Libbey Cinema and Animation, 2004.
  • Heimerdinger, Julia. Neue Musik Im Spielfilm. Saarbrücken: Pfau, 2007. Print.
  • Heimerdinger, Julia. “‘I Have Been Compromised. I Am Now Fighting against It’: Ligeti vs. Kubrick and the Music for 2001: A Space Odyssey.” The Journal of Film Music 3.2 (2010): 127-43. Print.
  • McCartney, Andra. “Alien Intimacies: Hearing Science Fiction Narratives in Hildegard Westerkamp’s Cricket Voice (or ).” Organised Sound 7.01 (2002): 45-49.
  • Whittington, William. Sound Design & Science Fiction. Austin, Tex.: University of Texas Press, 2007.
  • Wierzbicki, James. Louis and Bebe Barron’s Forbidden Planet: A Film Score Guide. Rowman & Littlefield, 2005.
  • Wierzbicki, James. “Shrieks, Flutters, and Vocal Curtains: Electronic Sound/Electronic Music in Hitchcock’s The Birds.” Music and the Moving Image 1.2 (2008): 27-53. Print.



In some of his films Rainer Werner Fassbinder focused on the emerging media world.

  • World on Wire (Die Welt am Draht, 1973)
  • The Third Generation (Die Dritte Generation, 1979)

From a sound perspective these films are particularly interesting, anticipating our current media torrent. World on Wire was restored in 2010; see Richard Brody’s review in The New Yorker.


The Rebirth of Music from the Spirit of Drama?


Contemporary art music and audio-visual composing

Jean Martin, March 2014

Keywords: audio-visual composing, spectral music, sound recording, emancipation of noise/sound in the 20th Century


Sunset Boulevard (1950, Billy Wilder) opening scene

I wanted to use this image as a metaphor – not for the death of music as we have known it, but for the state of floating transition it is in.

Much of the music composed in history related to specific activities in the world: it supported religious practice and ceremony, dance, dramatic opera, funerals, military or stately events. Only occasionally was music purely self-referential, as in the scholasticism of Renaissance music or the experimentations of New Music in the 20th Century.

The concept of the genius (mostly male) composer, the last of whom was arguably Stockhausen, is on its way out. A student recently called Stockhausen a dictator. This is obviously an exaggeration, but there is a core of truth to it. Do we want the meaning of sounds to be dictated to us while we sit passively in a concert hall chair? Film is equally a medium that expects the listening viewer to suspend disbelief (Coleridge, 1817) and submit to a passive state of perception. We are locked into a chair and a timeline for roughly two hours.

The word “composer” is now often replaced by “artist”, even for creators who mostly work with sound. This indicates that organisers of sound use other forms of expression as well: still and moving image projection, sound diffusion, technology in general and live performance. Ironically, the more virtual the distribution of music via the internet has become, the more live performance has gained in importance. Live performance means that the visual mise-en-scène has to be carefully considered as well. If you want to get noticed you have to perform live. This is why Philip Glass and Steve Reich both formed their own ensembles at the beginning of their careers, solely dedicated to performing their own music.

Thinking about the audience, the listeners, reminds us that music is a social activity. The focus in the musical meaning-making process has shifted towards the listener. The composer herself is a listener in the first place. Now the listener can become a composer. Audio technology has made this possible: the fact that any sound can be recorded means that this recorded sound, this sound object, has the potential to become the starting point for a new composition. This was made easier by samplers, digital tools which can record and manipulate any sound. DJs adopted this approach and created new genres: turntablism, hip-hop and DJ culture.


Technology played an important part in shaping the music of the 20th Century; in essence it was the electrification of music. It started with radio in the 1920s, a powerful new medium to influence the masses and to bring music to many more listeners. Today we consume music mostly through loudspeakers. What does this mean as a phenomenon? Phenomenologically speaking, it homogenises what we hear: dripping water, birdsong or a string quartet is generated by the vibrations of the loudspeaker membranes, which in turn set the air molecules into vibration. We will never get away from the fact that we listen through loudspeakers, however perfect the reproduction might be. Technology unifies experiences into something repeatable, similar.

Artists question practices that are taken for granted and use reproduction technologies in unconventional ways. Walter Ruttmann had the idea of using celluloid film for editing sound in his piece Weekend in 1930, only three years after the invention of film sound and long before sound editing was possible in any other way. One could record and mix on shellac records, but one could not edit, i.e. cut, records. Cage used a turntable as an instrument in his piece Imaginary Landscape No. 1 (1939). In the mid-1990s DJs used turntables for scratching and mixing, an analogue, gestural and embodied variation of sampling culture called turntablism. Steve Reich had another take on this: in 1966 he was intrigued by a recording of a young black man, Daniel Hamm, who talked about having been beaten by police. Reich looped a small segment of this recording with the words “come out to show them”. He then put two identical loops on two different analogue tape machines and started them at the same time. Slowly they move out of phase. In the process the meaning of the words gets blurred and turns into pure rhythmic sound material. This is Come Out (1966) by Steve Reich. Minimalism, or, to avoid the -ism, process music was born. His friend Terry Riley had organised a similar but different process in his famous piece In C two years earlier, in 1964.
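Reich’s tape-phasing process can be sketched in code. The following Python fragment is a minimal illustration, not a reconstruction of the piece: it assumes a sample-based model in which the second copy of a loop plays back 0.5% slower, so the two voices drift apart by a fixed fraction of the loop on every repetition.

```python
import numpy as np

SR = 8000  # sample rate in Hz; an arbitrary choice for this sketch

def phase_piece(loop, repeats, rate_ratio=0.995):
    """Mix two copies of the same loop, the second resampled slightly
    slower, so the voices gradually drift out of phase."""
    voice_a = np.tile(loop, repeats)
    n = len(voice_a)
    # Voice B reads through the loop at 99.5% speed (linear interpolation)
    positions = np.arange(n) * rate_ratio
    voice_b = np.interp(positions % len(loop), np.arange(len(loop)), loop)
    return (voice_a + voice_b) / 2.0

# Stand-in for the "come out to show them" fragment: half a second of tone
t = np.arange(SR // 2) / SR
loop = np.sin(2 * np.pi * 220 * t)

mix = phase_piece(loop, repeats=8)
# After 8 repeats voice B lags by 8 * 0.5% = 4% of a loop
lag_in_samples = len(mix) * (1 - 0.995)
```

The perceptual effect Reich describes, meaning dissolving into pure rhythm, comes from exactly this slowly and steadily growing lag.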

Experiments and Outsiders

The idea of using an experiment, a scientific method, in music is astonishing. Wikipedia defines an experiment as “an orderly procedure carried out with the goal of verifying, refuting, or establishing the validity of a hypothesis”.[4] Apart from exploring aspects of cause and effect, a central feature of an experiment is its repeatability. This intrusion of science into art tells us something about the status of science and technology in modern civilisations and their culture. Many composers have enthusiastically embraced the use of science in music: Trevor Wishart, Jonathan Harvey, Pierre Boulez, Tristan Murail and countless more.

A theme seems to be emerging here: outsiders have shaped the music of the 20th Century. Luigi Russolo, an artist belonging to the Italian futurist movement, declared in his 1913 manifesto “The Art of Noises” that from now on noises would be emancipated and at the centre of composition. Ionisation, a piece for 13 percussionists composed in 1931 by Edgard Varèse, put this new aesthetic into practice. Varèse coined the term organised sound after he was repeatedly attacked with the claim that his compositions were not music.

Pierre Schaeffer, a sound engineer working for Radio France in Paris in the 1940s, had the idea to create a new kind of music using purely recorded, concrete sound instead of abstract note symbols which then had to be interpreted and played by musicians. In 1948 he coined the term Musique concrète. This later evolved into electroacoustic music merging acoustic recording and electronic manipulation techniques. In an unexpected twist the techniques of the rather obscure practice of Electroacoustic music have gained great importance in the creation of complex soundtracks for contemporary films reaching millions of people. It is called sound design.

Helmut Lachenmann, a German contemporary composer, gave this aesthetic yet another twist. Lachenmann creates what he refers to as musique concrète instrumentale: revealing the sonic side effects of playing an instrument (scratching, breathing, hitting etc.) and putting these noisy, external, “non-musical” sounds at the centre of musical exploration.

Another outsider is Iannis Xenakis, an architect by training and a self-taught composer. As a collaborator of the famous architect Le Corbusier he created the Philips Pavilion for the 1958 World Exposition in Brussels. The pavilion’s curved surfaces grew out of the graphic sketches of string glissandi that Xenakis had drawn for his orchestral piece Metastaseis (1954).

Xenakis: Metastaseis score (1954)

Technology and the electrification of music had other unforeseen consequences: they created or inspired whole new genres. Blues used to be a very local affair on cotton farms in the Deep South of the United States, basic improvised music that kept the workers going during hard labour. The fact that some of these musicians were recorded meant that many more people could listen to their music, in particular aspiring musicians who wanted to learn from the masters. One can argue that audio recording enabled and facilitated the emergence of new musical styles like Blues, Jazz, Rock and Pop, because it could capture the fine nuances of improvised music which otherwise couldn’t be notated.

Philips Pavilion, Brussels Expo 1958


Towards audio-visual composing

In the 1970s Boulez had the idea of creating an institute that would firmly link science and the art of music. IRCAM was founded in 1977 and still exists today. Among its fields of exploration are analysis and synthesis, musical perception and cognition, and the acoustics of instruments and rooms. This research inspired a compositional approach called spectral music: Jean-Claude Risset, Jonathan Harvey and in particular Tristan Murail analysed acoustic sound materials and used the analytical data to extract pitches from environmental sound, creating scores to be performed by conventional ensembles. There is a connection here to sound design and its use in films. Combining images and tone colours is a common practice in contemporary film. Sound colour, itself a metaphor, can create mood. A great example of this aesthetic approach is Toru Takemitsu’s sound organisation for the Japanese film The Woman in the Dunes (1964, Hiroshi Teshigahara), where his textural music blends perfectly with environmental wind sounds to create an ominous atmosphere.
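The spectralists’ basic move, from analysis data to playable pitches, can be illustrated in a few lines of code. This is a hedged sketch of the general technique, not of any composer’s actual workflow: it synthesises a bell-like test sound (a stand-in for a recorded environmental sound), finds its loudest partials with an FFT, and rounds them to the nearest equal-tempered MIDI notes.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def strongest_partials(signal, k=3):
    """Return the k loudest spectral peaks (in Hz) of a 1-second sound,
    keeping only local maxima so window sidelobes are not counted."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / SR)
    peaks = [i for i in range(1, len(spectrum) - 1)
             if spectrum[i - 1] < spectrum[i] > spectrum[i + 1]]
    peaks.sort(key=lambda i: -spectrum[i])
    return sorted(float(freqs[i]) for i in peaks[:k])

def freq_to_midi(f):
    """Map a frequency to the nearest equal-tempered MIDI note number."""
    return int(round(69 + 12 * np.log2(f / 440.0)))

# A synthetic "found sound" with three inharmonic partials
t = np.arange(SR) / SR
sound = (1.0 * np.sin(2 * np.pi * 220 * t)
         + 0.6 * np.sin(2 * np.pi * 553 * t)
         + 0.3 * np.sin(2 * np.pi * 1310 * t))

partials = strongest_partials(sound)          # ≈ [220, 553, 1310] Hz
notes = [freq_to_midi(f) for f in partials]   # nearest tempered notes
```

A score for a conventional ensemble would then assign these approximated pitches to instruments, which is essentially what the spectralists do with much richer analysis data.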

More research will in future be dedicated to these synesthetic phenomena. Neuroscientists are beginning to analyse brain functions during audio-visual perception, although this research still operates far below any aesthetically meaningful level.

Programmers, or artists who are able to use code, can now create any work which can be described or formulated precisely in symbolic ways, i.e. in code: new instruments, whole compositions, sounds and images composed simultaneously, interactive environments which use sensors to detect movement, distance, densities, in short any form of data, which can then be mapped to any other data. A software instrument can encapsulate a whole composition, and each performance is just one possible rendition of a compositional idea. Scientific and mathematical ideas of chance, indeterminacy, stochastic processes, Fibonacci series or spectral analysis have inspired composers.
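The claim that a software instrument can encapsulate a whole composition is easy to make concrete. The toy Python sketch below is entirely hypothetical: the “composition” is a rule set (a random walk over a pentatonic scale), and every seed renders a different but equally valid performance of it.

```python
import random

SCALE = [0, 2, 4, 7, 9]  # pentatonic pitch classes; an illustrative choice

def perform(seed, length=16, base_note=60):
    """Render one performance of the 'composition': a seeded random walk
    over the scale, mapped to MIDI note numbers (60 = middle C)."""
    rng = random.Random(seed)
    degree = 0
    melody = []
    for _ in range(length):
        step = rng.choice([-1, 0, 1])                  # melodic motion rule
        degree = max(0, min(len(SCALE) - 1, degree + step))
        octave = rng.choice([0, 12])                   # register rule
        melody.append(base_note + SCALE[degree] + octave)
    return melody

# The same seed always yields the same rendition; a new seed, a new one
performance_a = perform(seed=1)
performance_b = perform(seed=2)
```

In environments like SuperCollider or Pure Data the same idea appears as a patch whose parameters, or incoming sensor data, select the rendition.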

Sound has become of great importance for practitioners in many artistic, scientific and technical disciplines: architects, artists, sculptors, scientists using sound for data sonification, film sound designers, interactive games designers etc. Sound has become a linking art.

Technology will continue to have a huge influence on cultural creations, including music. The computer, the universal machine, seems to reunite at least two of our senses, mirroring the way we naturally perceive the world simultaneously through eyes, ears and the other senses. The division of the arts into specialised fields of listening, seeing, touching etc. seems outdated and no longer necessary. But this poses huge challenges for creators. Truly universal artists with interests in various artistic, scientific and technological fields are required to create new works that appeal to a broad audience. Yet it is unlikely that a composer, writer and dramatist like Richard Wagner will emerge again. Instead, collaborations between imaginative people will become more important.

Here I would like to come back to my starting point and my special interest in film sound and the complex interaction between sound/music and moving images. My question is: can collaborative work in the context of film inspire new ways of composing and possibly reaching greater audiences?

Experimentation in music has been and still is an important aspect of composing and developing new ideas. The drawback of experimental music, if one can subsume all the different activities under one term, is that it usually attracts only very small audiences. This is not surprising, because much experimental music remains at an exploratory stage, investigating materials and processes. But ultimately any art form needs a real audience to thrive: a good mixture of ordinary people from various backgrounds. Art and music are an extended discourse which invites dialogue and responses from the consumer. They are an expression of thinking about life and of how we experience the world by means beyond language.

Composing music and sound design for films poses specific challenges. The raison d’être of any music and sound design, indeed of the soundtrack as a whole, is the narrative and the image track of the film. This imposes constraints on the time-based art of music. Film music can never freely follow its own logic, developing themes or sound textures for as long as it takes; it is limited by the duration of the scene it supports or comments on. Constraints can be the best source of inspiration and creativity, as one can beautifully observe in Lars von Trier’s collaborative film The Five Obstructions (2003, with Jørgen Leth).

In my analyses of films I have observed that film composers adopt two fundamental musical approaches: on the one hand the thematic concept of music, using Leitmotifs and harmonic tonality; on the other a timbral or spectral aesthetic, which expresses itself through complex textures and drones. The latter is closely linked to sound design, which emerged from the electroacoustic music tradition and the 20th Century aesthetic of musicalising environmental, indeed any recorded, sound or noise.

Most music in commercial film is pastiche, recycling existing aesthetic approaches and techniques. But digital technology has enabled new ways of audio-visual composing. For example, audio-visual sampling is frequently used to express non-normal states of mind: drug-induced hallucinations, psychosis, dreams, the overstretching of the nervous system in war etc. Michel Gondry and Chris Cunningham have playfully and artistically pushed the boundaries of audio-visual expression. Darren Aronofsky in Requiem for a Dream (2000), and Lars von Trier in Dancer in the Dark (2000), with music by Björk, who also plays the lead role, have integrated these experimental techniques into mainstream film culture. Some film sound theoreticians have argued that sound design is the new score.[5] The related practices of field recording, acoustic ecology and soundscape composition indicate a new interest, both commercial and artistic, in exploring and working with environmental sound in highly differentiated ways beyond the purely musical. This aesthetic approach acknowledges the emotional and evocative power of sound. Much research has been done on how we actually listen to and perceive sound.

New audio-visual experiments are happening outside the mainstream film industry. Music videos in particular used to be an area of immense creativity. Current computational practice has expanded this. With programs like Processing, SuperCollider, Max/MSP and Pure Data (Pd), truly audio-visual composing has become a possibility, at least on a technical level. What is missing is the use of these new techniques to address themes and questions of general human interest: the temporality and precariousness of existence, power, inequality, love, trust, struggle, death etc. On this level Richard Wagner is still dimensions ahead.

In his late film Playtime (1967) Jacques Tati plays with seemingly trivial, everyday situations of modern life which, on closer looking and careful listening, create dissonances in our audio-visual perception. These dissonances are comical in effect. Through his unusual interpretation of modern life Tati creates awareness of its absurdities, revealing ideology: the blind belief in progress and in constant movement that is not going anywhere (the roundabout as a metaphor for a merry-go-round going nowhere).

[1] This process of disintegration, or positively, expansion of tonality started much earlier. A crucial moment was Wagner’s Tristan und Isolde, 1859.

[2] Cage: Experimental Music (1958: 10): “…one may give up the desire to control sound, clear his mind of music, and set about discovering means to let sounds be themselves rather than vehicles for man-made theories or expressions of human sentiments.”

[3] Cage: The Future of Music: Credo, 1968 p. 5

[4] (viewed Mar 2014)

[5] Kulezic-Wilson, D. (2008). “Sound Design is the New Score.” Music and the Moving Image 2(2).

Nebraska (2013)


A film by Alexander Payne, shot in under-exposed black-and-white tones, which creates an almost irritatingly sombre atmosphere. At the same time the dialogues are humorous. The music by Mark Orton connects to the place, the flat and boring American Midwest. It also evokes the melancholy of the fading life of Woody Grant, the father of an ordinary family, and his surreal, delusional fantasies of having won one million dollars (a sales scam). At least the journey brings father and son closer together. The music is modest in the sense that it doesn’t prescribe what you are supposed to feel. At the same time it is well composed, evocative and atmospheric, creating a mood for the places of the story.

Silent Film – Implications for contemporary film practice


Silent film, or mute film, as Chion prefers to call it, since the only element missing was dialogue, is still relevant today.

First, long before film technology was invented and technically realised, certain inventors (Muybridge, Nadar et al.) had a dream: to capture movement, colour and sound, as we perceive them every day.

The French film critic and theorist André Bazin quotes Nadar in his essay The Myth of the Total Cinema:

“My dream is to see the photograph register the bodily movements and the facial expressions of a speaker while the phonograph is recording his speech” (February, 1887). (Bazin 1971, Vol 1: p.20)

Bazin continues:

“If the origins of an art reveal something of its nature, then one may legitimately consider the silent and the sound film as stages of a technical development that little by little made a reality out of the original myth. It is understandable from this point of view that it would be absurd to take the silent film as a state of primal perfection which has gradually been forsaken by the realism of sound and color. The primacy of the image is both historically and technically accidental. The nostalgia that some still feel for the silent screen does not go far enough back into the childhood of the seventh art. The real primitives of the cinema, existing only in the imaginations of a few men of the nineteenth century, are in complete imitation of nature. Every new development added to the cinema must, paradoxically, take it nearer and nearer to its origins. In short, cinema has not yet been invented!” (Bazin, André, and Hugh Gray. What Is Cinema? Vols. 1-2. Berkeley; London: University of California Press, 1971. Vol. 1, p. 21)

The gradual technical realisation of this dream meant that the reproduction technologies for moving image and for sound were invented at different historical stages. Interestingly, this separation is still maintained in professional film production today: sound and image are recorded and processed separately.

Which practices of silent film are still in use today?

  • Silent film defined how music was used with moving images. Even today composers such as John Williams pursue such an aesthetic in their music.
  • The viewer can easily imagine implied sounds which are evoked by images, gestures, music or intertitles. Not everything we see in the images has to be duplicated by sound.
  • Intertitles are an elegant way of indicating a change of scenery or advancing the narrative. By contrast, a voice-over explaining, e.g. in a documentary, what is going on is extraneous to the location soundtrack.

The vococentrism of many films has been criticised since the commercial introduction of sound film around 1927. René Clair despised the talkies as photographed theatre: “It (the talkie) has conquered the world of voices, but it has lost the world of dreams.” (in Weis & Belton, 1985: 95)

So how did later film makers respond to such criticism through their films?

Two examples come to my mind: Play Time (1967) by Jacques Tati; and Belleville Rendezvous (2003) by Sylvain Chomet.

Neither film uses dialogue. If one hears voices, as in Play Time, they are just sounds like any other and carry little semantic meaning.

The two films follow opposing strategies in their use of sound. Play Time has realistic images which, through their careful selection and mise-en-scène, feel hyper-real: they depict a glittering, shiny and sterile modern world. The sounds, on the other hand, are completely reconstructed in the studio. The atmospheres and environmental sounds are highly reduced in their realistic detail. Tati focuses instead on carefully selected sound events, which amplify certain realities that normally go unnoticed. This has a comical effect.

In Belleville Rendezvous Sylvain Chomet used animation of drawn images. The image track is rich, but reduced compared to photo-realistic images. This visual reduction is compensated by a hyper-real soundtrack with beautifully recorded environmental sound, synchronised and mixed to the image track in a highly flexible and dynamic way. It is a mute film, i.e. there is no dialogue.

The transition from silent to sound film has been successfully dramatised in films, e.g. in the classic comedy Singin’ in the Rain (1952, Gene Kelly, Donald O’Connor) and more recently in the Oscar winning film The Artist (2011, Michel Hazanavicius).

UK’s favourite film soundtrack


The BBC staged a public vote for the “best film score” of the last 60 years. Star Wars by John Williams won the competition. In a way such a vote completely misses the point of music in the context of moving images.

The idea of voting for the best film music is problematic, because it encourages a concept of film music as an autonomous art. The main focus was on the music itself, and not on music as being part of an audio-visual work where all the elements interact with each other – story, dialogue, images, sound, music. The reason for music being part of a film soundtrack is the story, the narrative. Music never stands for itself in film.

Instead, listeners and the jury were encouraged to focus on

a. the music itself

b. the memories of and associations with the film and the experience of having watched it

c. the harmonic tonality rather than the sound


The BBC’s Sound of Cinema soundtrack list is:

  1. Star Wars Main Theme (John Williams)
  2. The Good, The Bad And The Ugly (Ennio Morricone, arr. David Arnold)
  3. West Side Story (Leonard Bernstein)
  4. Lawrence Of Arabia Main Theme (Maurice Jarre)
  5. Vertigo (Bernard Herrmann)
  6. The Third Man (Anton Karas arr. Cox)
  7. Dark Knight Rises Suite (Hans Zimmer)
  8. Grease (various)
  9. Sound Of Music (Richard Rodgers/Oscar Hammerstein II)
  10. Apocalypse Now (Wagner)
  11. Psycho (Bernard Herrmann)
  12. Django Unchained (Bacalov/Morricone)
  13. Mary Poppins (Sherman brothers)
  14. Billy Elliot (Swan Lake music, Tchaikovsky)
  15. The Wizard Of Oz (Harold Arlen, EY Harburg, Herbert Stothart)
  16. There Will Be Blood (Jonny Greenwood)
  17. Planet Of The Apes (Jerry Goldsmith)
  18. 8 ½ ‘Otto e Mezzo’ (Nino Rota)
  19. Sholay (RD Burman)
  20. Bombay (AR Rahman)

(quoted from )

Food cinema


People have ideas in Berlin! Look at this:

Every Friday Speisekino Moabit shows movies and serves a menu to go with them. Entrance is free; whoever wants one can get a €5.00 menu.

Sampling in Electroacoustic Music


Simon Waters, a composer and theorist of electroacoustic music, analyses the practice of sampling:

Sampling as a technique is paradigmatic of the uneasy relation between tradition and innovation, incorporating the archival instinct of the former and the speculative and exploratory impulse of the latter. Sampling can be regarded as the ultimate time-manipulation tool, the ultimate musical tool of repetition and therefore of recontextualisation. (Waters 2000: 71)

Sampling can be regarded, then, as representing an important step in the re-empowerment of ‘listeners’ as composers, both in the sense that new configurations of familiar sounds encourage ‘listening again’ and in the more profound sense that sampling blurs the distinction between technologies of production and reproduction and therefore between composer and listener (Waters 2000: 76).

This aesthetic approach to sound has been increasingly adopted by film makers and sound designers.

Waters, Simon (2000). ‘Beyond the acousmatic: hybrid tendencies in electroacoustic music’ in: Simon Emmerson (ed.): Music, Electronic Media and Culture, Aldershot: Ashgate, pp. 56-83.

Barry Truax – on soundscape composition


Truax[1] compiled a list of criteria for a soundscape composition which can be applied directly to film soundtrack design (quoted from Drever 2002: 22):

(1) Listener recognisability of the source material is maintained, even if it subsequently undergoes transformation;

(2) The listener’s knowledge of the environmental and psychological context of the soundscape material is invoked and encouraged to complete the network of meanings ascribed to the music;

(3) The composer’s knowledge of the environmental and psychological context of the soundscape material is allowed to influence the shape of the composition at every level, and ultimately the composition is inseparable from some or all of those aspects of reality;

(4) The work enhances our understanding of the world, and its influence carries over into everyday perceptual habits.  

Truax summarises the implications of soundscape composition:

The soundscape composition deals not only with listeners’ abilities to identify and make sense out of acoustic environments and how they change, but also with the patterns and habits of listening and memory (Truax 2002: 11).

An awareness of contemporary listening habits and sonic memories is crucial for a film sound designer. A prospective film sound creator can learn from these artistic soundscape compositions.

In his latest article (“Sound, Listening and Place: The Aesthetic Dilemma.” Organised Sound 17(3), 2012, Cambridge: Cambridge University Press) Truax distinguishes between sonification, phonography and virtual soundscapes, using the concept of mapping to connect these areas of sound practice. Sonification, comparable to visualisation, maps (usually scientific) data, like weather data, onto sound. Phonography, an old-fashioned word derived from Edison’s phonograph, maps real-world soundscapes onto recordings as untampered and unedited as possible. The notion of untampered, “authentic” location sound is ideological and not sustainable in the digital era. I find John Drever’s suggestion to adopt a reflective, ethnographic approach to sound recording useful, in particular for the process of artistic soundscape composition or film and game sound design. Virtual soundscapes are created for films, video games or purely artistic purposes, exploring ideas of sonic storytelling, oral history and local geographic memory.
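Parameter mapping, the core of the sonification Truax describes, is simple to demonstrate in code. The sketch below uses invented temperature values, not real weather data, and maps them linearly onto MIDI pitches, the coldest day to C3 and the warmest to C6; every number here is an assumption chosen for illustration.

```python
def map_range(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale x from [in_lo, in_hi] to [out_lo, out_hi]."""
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# Invented daily temperatures in °C, one value per day
temps_c = [3.5, 5.0, 9.2, 14.1, 18.6, 22.0, 17.3]

lo, hi = min(temps_c), max(temps_c)
# Coldest day -> MIDI 48 (C3), warmest -> MIDI 84 (C6)
pitches = [round(map_range(t, lo, hi, 48, 84)) for t in temps_c]
```

The same mapping idea scales to any data and any sound parameter: tempo, loudness, filter cutoff or spatial position.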

[1] Truax online (viewed Nov 2011)

Drever, John Levack (2002). ‘Soundscape composition: the convergence of ethnography and acousmatic music’ in: Organised Sound 7(1): 21–27, Cambridge:  Cambridge University Press.

Truax, Barry (ed.) (1999). Handbook for Acoustic Ecology. No. 5, The Music of the Environment Series, The World Soundscape Project (also published in 2001 on CD with Acoustic Communication).

——— (2001). Acoustic Communication, 2nd ed. Westport, Conn. ; London: Ablex Publishing.

———(2002). “Genres and Techniques of Soundscape Composition as Developed at Simon Fraser University.” Organised Sound 7(1): 5-14.

———(2008). “Soundscape Composition as Global Music: Electroacoustic Music as Soundscape.” Organised Sound 13(2): 103-109, Cambridge: Cambridge University Press.

———(2012). “Sound, Listening and Place: The Aesthetic Dilemma.” Organised Sound 17(3), Cambridge: Cambridge University Press.