WU Chengyu

30/04/2025

Main Concept
In my last project (the mid-semester project), I focused on hearing repetitive life as it is by recreating the soundscape of a daily routine. In this project, I want to push that one step further and explore the possibility of listening to life as music.

Speaking of listening, there are three commonly cited listening modes, introduced by Pierre Schaeffer and elaborated by Michel Chion (1994): causal (source-oriented), semantic (context-oriented) and reduced (sound-oriented) listening. From my point of view, these all depend more or less on the sound source itself, whether one listens to identify the source, to understand the context, or simply to enjoy the sound as such. What I want to stress, however, is slightly different: one hears sounds and then composes them into “music” in one’s mind. This is not just sketching a piece mentally, like “I want a guitar playing energetic chords”; rather, it is hearing the “rhythm” and “melody” in one’s surroundings and combining them into a real-time improvised concert in the mind: imagine someone walking past a construction site and starting to rave to the beat of the drills as if it were a techno club. In this way the act of listening depends more on the perceiver, who must actively listen to the source to realise its potential. The process of listening then becomes, first, one of the three basic listening modes, followed by the extra steps of deconstructing and re-composing.

Execution
To bring the concept into an actual piece, I decided to create a music piece using only field recordings. I went to a big park where I could capture both the sounds of people and the sounds of nature. The main sound-design technique I used on these recordings was to heavily resonate certain frequencies (e.g. 440 Hz, the note A). This way the sound carries both a note in tune and the texture of the actual recording, which makes it a lot more interesting. I then adjusted the envelope (ADSR) of each sound to shape it into different “instruments” such as pads, plucks, kicks and snares. After that I applied various effects as needed, such as delays, reverbs, chorus and distortion. Finally, I used those sounds to compose a generative ambient piece in Ableton Live.
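The two steps above, ringing a pitch out of a recording and then shaping it with an ADSR envelope, can be sketched roughly as follows. This is a minimal plain-Python illustration of the idea, not what any particular plug-in does, and the function names are my own:

```python
import math

SR = 44100  # assumed sample rate

def resonate(x, freq=440.0, r=0.999):
    """Two-pole resonator: sharply boosts `freq` while keeping the
    broadband texture of the input recording. `r` close to 1 gives a
    narrower, longer-ringing resonance."""
    w = 2 * math.pi * freq / SR
    a1, a2 = -2 * r * math.cos(w), r * r
    y1 = y2 = 0.0
    out = []
    for s in x:
        y = s - a1 * y1 - a2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

def adsr(n, a=0.01, d=0.1, s_level=0.6, r_time=0.2):
    """Linear ADSR envelope of n samples; a/d/r times in seconds.
    Multiply a resonated recording by this to turn it into a pad
    (slow attack) or a pluck (fast attack, short release)."""
    an, dn, rn = int(a * SR), int(d * SR), int(r_time * SR)
    sn = max(n - an - dn - rn, 0)
    env = []
    env += [i / an for i in range(an)]                    # attack 0 -> 1
    env += [1 + (s_level - 1) * i / dn for i in range(dn)]  # decay 1 -> s
    env += [s_level] * sn                                 # sustain
    env += [s_level * (1 - i / rn) for i in range(rn)]    # release s -> 0
    return env[:n]
```

Feeding a recording through `resonate` and multiplying sample-wise by `adsr` gives roughly the pitched-but-textured “instrument” described above.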

For the mixing, I rendered the stereo stems from Ableton Live and mixed them in REAPER with the IEM plug-in suite. I managed to mix them into a 2nd-order ambisonics file.
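For reference, the maths behind placing a mono stem in a 2nd-order ambisonic mix is fairly compact. The sketch below computes the nine per-channel panning gains in the ambiX convention (ACN channel ordering, SN3D normalisation) that the IEM plug-ins use; this is an illustrative reconstruction of the standard encoding equations, not the plug-ins’ actual code:

```python
import math

def encode_gains_o2(azimuth_deg, elevation_deg):
    """Nine per-channel gains that place a mono source at the given
    direction in a 2nd-order ambisonic mix (ambiX: ACN order, SN3D).
    Multiply the mono signal by each gain to get the nine channels."""
    az = math.radians(azimuth_deg)   # 0 = front, positive = left
    el = math.radians(elevation_deg)  # 0 = horizon, positive = up
    ca, sa = math.cos(az), math.sin(az)
    ce, se = math.cos(el), math.sin(el)
    k = math.sqrt(3) / 2
    return [
        1.0,                          # ACN 0: W (omni)
        sa * ce,                      # ACN 1: Y
        se,                           # ACN 2: Z
        ca * ce,                      # ACN 3: X
        k * math.sin(2 * az) * ce * ce,  # ACN 4: V
        k * sa * math.sin(2 * el),       # ACN 5: T
        (3 * se * se - 1) / 2,           # ACN 6: R
        k * ca * math.sin(2 * el),       # ACN 7: S
        k * math.cos(2 * az) * ce * ce,  # ACN 8: U
    ]
```

For example, a source dead ahead on the horizon (`encode_gains_o2(0, 0)`) puts full level in W and X and nothing in Y or Z, which matches the intuition that it sits on the front axis.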

Reflection
I’m quite pleased with how it came out, especially given that it uses only field recordings. However, I think I’m still rather poor at mixing in ambisonics, as I don’t know how to achieve some of the techniques I use in stereo, such as side-chaining. I also find it hard to mix while monitoring in binaural, as some spatial information is not really audible. I might need more tutorials on this.
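On the side-chaining question: one commonly suggested approach (an assumption on my part, not something done in this project) is to derive a single gain curve from the trigger signal and apply it identically to all nine ambisonic channels, so the mix ducks in level without smearing the spatial image. A rough sketch, with all names my own:

```python
def sidechain_gains(trigger, threshold=0.3, ratio=4.0, release=0.995):
    """Per-sample gain curve from a mono trigger (e.g. the kick),
    using a simple peak follower: fast attack, slow release."""
    env = 0.0
    gains = []
    for s in trigger:
        env = max(abs(s), env * release)
        if env > threshold:
            over = env - threshold
            g = (threshold + over / ratio) / env  # compress the overshoot
        else:
            g = 1.0
        gains.append(g)
    return gains

def duck_ambisonics(channels, gains):
    """Apply one shared gain curve to every ambisonic channel, so the
    ducking never alters the relative levels that encode direction."""
    return [[s * g for s, g in zip(ch, gains)] for ch in channels]
```

Because every channel gets the same per-sample gain, the ratios between W, X, Y, Z and the higher-order channels are untouched, which is what preserves the spatial image.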

Recordings Used
1. Ambience of the park

2. Tapping on a railing with fingers

3. Construction site

4. Footsteps

5. Kids “marching” and yelling

6. An opera group doing a street workshop

7. Sweeping plants

8. Knocking on rails with an umbrella

Reference
Chion, M. (1994). Audio-Vision: Sound on Screen. Trans. C. Gorbman. New York: Columbia University Press.
