Category Archives: Post Production Work

‘Enchanting Rupert’ and ‘It Happened One Day’ at the Viewster Online Film Festival

Hello everybody!

Good news to spread – two short movies, for which my company futuresonic did the audio post production work, are now showing at the Viewster Online Film Festival. For a limited time you can watch, comment, rate, like and share the movies using the Viewster platform. Both are romantic comedies, so if you are feeling romantic these are a good watch – just click on the posters and you’ll get right to the movies!
I posted more stills on the futuresonic blog – have a look at them here.
Every vote counts, so if you like the movies please share your opinion on the Viewster platform.
Thanks for watching!
Viewster Online Film Festival logos
Enchanting Rupert – Viewster Online Film Festival poster
It Happened One Day – Viewster Online Film Festival poster

Removing excessive natural reverberation in audio post

One problem you may encounter in audio post work is a large amount of natural reverberation recorded on the sync tracks. In the worst case this will make your dialogue difficult to understand and make it hard to match other (more closely mic’ed) dialogue to it.

In this post I want to share two approaches to reducing this natural reverberation on the track, leaving you with cleaner audio to work with.

Solution 1: Expanding

You’ll need a downward expansion plug-in (i.e. a gate with a ratio setting) such as Avid’s Dynamics III Expander/Gate. Use a low ratio (1.1–1.4), a fast attack, and set the threshold to 0 dB. Leave hold and release relatively short. You will notice that the reverb tails at the ends of words drop significantly. Of course this does not remove early reflections, but at least it kills annoyingly long decay times, leaving you with more intelligible dialogue. The higher the ratio, the more intense the effect.
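If you want to experiment with this gain behaviour outside your DAW, here’s a minimal downward-expander sketch in Python. It is a rough approximation of what such a plug-in does – the parameter names and defaults are illustrative, chosen to mirror the settings suggested above (threshold at 0 dB, low ratio, fast attack), and are not modelled on any particular product.

```python
import math

def downward_expand(samples, threshold_db=0.0, ratio=1.4,
                    attack=0.001, release=0.05, sr=48000):
    """Downward expander: attenuates material below the threshold.

    With the threshold at 0 dB (as suggested in the post) the whole
    signal sits below it, so quieter material - like reverb tails -
    is pushed down progressively further. A ratio of 1.1-1.4 gives
    the gentle effect described above; higher ratios act more like
    a hard gate. (A hold stage is omitted for brevity.)
    """
    att = math.exp(-1.0 / (attack * sr))   # fast attack
    rel = math.exp(-1.0 / (release * sr))  # relatively short release
    env, out = 0.0, []
    for x in samples:
        level = abs(x)
        # one-pole envelope follower: attack coefficient while rising,
        # release coefficient while falling
        coeff = att if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        env_db = 20.0 * math.log10(max(env, 1e-10))
        if env_db < threshold_db:
            # below threshold: cut (ratio - 1) dB per dB of distance
            gain_db = (env_db - threshold_db) * (ratio - 1.0)
            x *= 10.0 ** (gain_db / 20.0)
        out.append(x)
    return out
```

Feeding a decaying sound through this, the quiet end of the tail comes out noticeably quieter than it went in, while the loud onset is hardly touched – exactly the behaviour described above.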

Removing reverb tails on sync sound using a gate

Solution 2: Using phase-inversion, EQ and compression

This is slightly more complicated but gives you more freedom with regard to which frequencies you want to affect. In the screenshot below, I have duplicated my problematic track and processed the audio on the duplicate using an equaliser with just the phase inverter switched on. Playing back both tracks you should hear – nothing (if you still hear something, check that all automation on both tracks is exactly the same).

Original and phase-inverted track

Now, add a compressor to the phase-inverted track and use settings like these (important: fast attack, low threshold, low ratio):

Compressor setup

If you need a more aggressive effect, try raising the compression ratio (at first the sound will become more and more similar to the original, since you are adding less and less of the phase-inverted signal), then raise the output gain of the compressor until you are satisfied with the result. Here’s an example:

More extreme compression settings

Last but not least: if you want to tweak the frequency response of your effect, insert an equaliser before the compressor. Experiment with low and high shelves for a start and see how they affect the sound. Interestingly, if you shelve off the highs of the phase-inverted track, the low and mid frequencies become more accentuated, and vice versa.
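The whole routing can also be sketched offline in a few lines. The Python below is a rough approximation – not a model of any particular compressor plug-in – that sums the original with a compressed, polarity-inverted copy. Loud direct sound gets the most gain reduction on the inverted copy, so it cancels least; quiet reverb tails are barely compressed and cancel almost completely.

```python
import math

def compress(samples, threshold_db=-50.0, ratio=2.0,
             attack=0.0005, release=0.05, makeup_db=0.0, sr=48000):
    """Simple feed-forward compressor: fast attack, low threshold,
    low ratio, as recommended in the post. Values are illustrative."""
    att = math.exp(-1.0 / (attack * sr))
    rel = math.exp(-1.0 / (release * sr))
    env, out = 0.0, []
    for x in samples:
        level = abs(x)
        coeff = att if level > env else rel
        env = coeff * env + (1.0 - coeff) * level   # envelope follower
        env_db = 20.0 * math.log10(max(env, 1e-10))
        gain_db = makeup_db
        if env_db > threshold_db:
            # above threshold: the louder the signal, the more reduction
            gain_db += (threshold_db - env_db) * (1.0 - 1.0 / ratio)
        out.append(x * 10.0 ** (gain_db / 20.0))
    return out

def dereverb_by_cancellation(samples, **comp_settings):
    """Sum the original with a compressed, polarity-inverted copy.

    Quiet material (reverb tails) passes the compressor nearly
    untouched and cancels against the original; loud direct sound is
    compressed hardest on the inverted copy, so it survives the sum.
    """
    inverted = [-x for x in samples]
    compressed = compress(inverted, **comp_settings)
    return [a + b for a, b in zip(samples, compressed)]
```

This also shows why raising the ratio moves the result back toward the original: the harder the inverted copy is compressed, the less of it remains to cancel anything.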

Finally, here’s a screenshot showing the original audio on top and the three approaches discussed (simple gating, phase-invert and compress with a low ratio, and phase-invert and compress with a high ratio) below.

Reducing reverb tails - results

That’s it for now – I hope you liked the read. If you’ve got any feedback or want to discuss results, feel free to leave a comment!

Signing off…

Norbert

One more sound design challenge…

Shaun Farley’s monthly Sound Design Challenge has been really interesting this time (as always…) – creating a Yeti! Here’s Shaun’s official site post, and my entry:

David Sonnenschein and Shaun Farley discuss my Sound Design Challenge Entry

It was a great pleasure for me reading David’s and Shaun’s impressions on my sound design challenge entry. It’s fantastic to see what they think about my work. The original entry can be found on Shaun’s Dynamic Interference site. The books they mention are a must-read! Here’s a copy of the original text:

Analyzing The Sound Design Challenge: SDC003

I’m pleased to bring you another conversation with David Sonnenschein. This time around, we talk about October’s Challenge, Metamorphosis, and Norbert Weiher’s entry. You’ll probably also notice, if you haven’t already, that we tend to use these to talk about any related topic that pops into our head. However, no unworthy conversation points will be found below.

Shaun: So just before we begin, for the people who haven’t been to the site before or maybe weren’t around for this particular challenge, the challenge was to take one of three sounds that I posted and morph them into five new sounds. They could only use four processes for each sound. And for the fifth sound, they could still use four processes, but they were only allowed to use EQs to create that sound. So, that was kind of where we went, and we got a wide variety of sounds and interpretations out of those sounds, which was very interesting, but obviously Norbert won.

I know you’ve listened to his entry. What are your first impressions of his sounds?

David: Well, I was really impressed by some of the realistic aspects. How it felt in listening to, certainly the first three of them, that they were so clearly defined. I could close my eyes and see what was happening. And also, not just having a single ambient sound, but an actual story with a beginning, middle and end. That really impressed me, greatly. That he was able to hear it in his mind, and then really create something. And in reading his interview, that definitely was part of his creative process; to be able to pull, out of a palette of sound and frequencies, a very clear image. That was really what impacted me most.

Yeah. He had an interesting process where he took a little time kind of exploring the sounds to find what elements he could discover that were in there. Then he decided on what he was going to make out of those elements.

Right. And you know, that reminded me a bit of what he termed procedural audio, and what’s being used a lot in working with signal processing. Pure Data is the name of one of the audio programming languages that people have worked with. And I’d like to refer anybody who wants to know more about that to an extraordinary book called Designing Sound, by Andy Farnell. The work that Andy does is somewhat similar to the process that Norbert used, which is to get down to the real basic frequencies of what’s creating any specific sound.

Andy diagnoses it, a little bit, from the specifics of how the sound is being generated…the actual vibrations; for example, a cricket, and looking at the friction that the wing has on the leg. Or a tea kettle and the vibration of blowing the steam out of the little hole. He will create mathematical algorithms out of that, and then send a signal through those and be able to modify those sounds.

Norbert has done it in a similar way with a spectrum of sound that he has then selected how to modify. Not with a mathematical algorithm in this case, but with plugins that are essentially doing a similar transformation. I think that that’s, creatively, a similar kind of approach to what Andy’s been doing.

Andy’s an amazing font of knowledge, but I haven’t gotten to his book yet. It’s sitting around waiting for me to pick it up. It’s a voluminous tome, to say the least. [laughs]

Oh, it is. I’ve met Andy and spoken with him, and actually did an interview for the upcoming second edition of my own book, Sound Design, which will include interactive media. So, I’ve been collaborating with him a little bit in that area of education as well. It’s very useful for gaming, where the story is being modified by the player. And in this case, Norbert is the player.

He’s playing with the sounds and creating things out of that; in a much more refined way than a video gamer would be. But he’s moving around in space. He’s changing perspectives. He’s creating images with sound, and that’s what I really admired so much with his approach here.

Yes, and his identification of those elements, and what he could do with them, was so well thought out. He did all of these sounds with the basic plug-ins that come with the Pro Tools system. He didn’t go out looking for all the crazy wacky Waves plug-ins or Sound Toys. He was able to do this with some of the more basic plug-ins, and I think that’s a testament to what can be done with sounds if we really sit down and think about how to approach the design.

Exactly, and one of the things that he’s done successfully, I believe, is he’s listened to his own world of sound and created an internal ear for himself. To be able to pick out frequencies in this case that are in this original sound that was posted, and be able to hear in those individual frequencies, other things. Certainly without breaking your restrictive rules on what he could do, he could listen in all sorts of ways. Isolate different frequencies, for example, and say, “Ok, I’m going to do this with it,” and then take it and move it around.

So, I think that what he’s demonstrated, as have many people in this month’s challenge, is that we as sound designers and editors have the capacity to manipulate what we’re hearing in a very, very specific way…if we know how to listen properly. And I think that’s a skill like a musician learning, not only to play in tune and on the beat, but to be able to integrate very well with other musicians; or within a composition that needs, let’s say, a particular complementary tone or counterpoint. So it’s really the sign of a good composer in the music world, that they can imagine, in their minds, what is going to make the effect that they want. It is, I think, a talent, but it is something that we can be trained to do.

This is something we’ve mentioned in the past. For anyone who’s read your book in the past, or perhaps for a few dorks like me…Chion’s books, or even any of these previous interviews, might recognize the term “Reduced Listening.” One of those wonderful listening modes. Why don’t you take a moment to talk about, for anyone who may not have read about it in the past, what “Reduced Listening” is?

I have to credit Michel Chion, who originated many of these ideas and terms.

Yeah. Audio Vision, The Voice in Cinema…

Yeah. Audio Vision is his main work, which he did about 30 years ago or so, and he’s written several since then…in Audio Vision, which I highly recommend for anyone working in film sound, he speaks of three different listening modes; and I, in my book, added a fourth that I feel is pertinent.

The “Reduced” listening mode is really listening to the physical quality of the sound; as if you were to look at the soundwave and describe its amplitude, its frequencies, and of course many other aspects such as the envelope itself. If you’re talking about several sounds, you could be talking about the rhythm, the harmony. It’s really the way a physicist or a sound engineer would look at it, and what we might be using in terms of plug-ins and digital processing to manipulate the sound. It’s not generally the way we listen in our everyday awareness; except when we say, “Oh, that’s too loud,” or “That’s too high,” or “That’s the screeching of fingers on the chalkboard.” You know, that last one is really “Causal” – that’s where it’s coming from. You would say that sound is irritating, or that sound’s annoying. The reason it sounds that way is that the harmonics are really, really beating in our ears and creating a physical sensation.

So, the second kind of listening mode, as I mentioned, is “Causal,” which is really the source. Where is it coming from? What do we call it? It’s a dog barking, or it’s a car going by. That’s how we find our sounds in sound libraries. That’s how we label them if we’re in conversation.

The third one is called “Semantics,” which is really about meaning. And the meaning could be the feeling of it, like the scratching on the chalk board is irritating. Or it’s a signal like a police siren, telling you something. Telling you to pull your car over. And of course words are semantic meaning; they symbolize something other than the sound itself.

The other one that I’ve added is called “Referential” listening, which is about the context of the sound. Depending on what culture you’re in when you hear something…it may mean something to one culture, and something different to another. Or historically, when you hear the sound of a telephone, it depends on what decade you’re making that movie in; the ’50s are going to sound different than the 1980s, than a cell phone in 2000. This is going to be an identification for the listener and the audience as to where we are and when we are.

So, that’s the basic listening modes. In this case, of Norbert’s work, he’s really taken something that is kind of a pure “Reduced” quality sound…a kind of a buzz, I’m not even sure where it came from. It sounds like…I’m not sure what the causal was…

It was a bit of electrical interference that I recorded using an unbalanced telephone recorder pickup going straight into a sound recorder. I basically just went around to different electrical devices and recorded their individual forms of electrical interference. So, I just happened to pick that one out of the group.

And so, what you just described is the “Causal.” What we say “Semantically,” it might generate a certain feeling in people of unease or discomfort, or it might feel a little bit sci-fi or something like that. But the “Reduced” quality all the participants in the Sound Design Challenge were going inside of what’s really there physically in that waveform; and it was very, very, rich. You offered them, as if it was a pallette of colors, of many, many, different colors to play with. Literally frequencies, and so you can separate them very easily into different components.

Some of the others – not Norbert’s – when I listened to the last sound, the one where they could just change the EQ, I’m really curious about how they got some of those sounds the way they did. Because when I change EQ, I usually isolate certain sounds, but it sounded like little…bouncing balls and…beeps all over the tonal scale. I’m very curious how they used EQ to do that.

Well, one of the things that may have helped them do that was that I said they were allowed to cut as much as they want. They could do as many cut and paste, trims or edits as they wanted. That was, kind of, “free.” Volume and pan automations…things of that nature. Basic manipulation of the sound that isn’t changing its tonal characteristics.

I see, so they probably cut little clips of it, and just little tiny tones out of that.

That’s…my guess. [laughs]

Yeah, but it sounded like more than four different tones, and I was very impressed with it. It was really fun. Regardless, the idea of giving a restriction…it’s like a haiku poem, you know? You’ve got to work in the five-seven-five syllables, and within that…an infinite number of things. So you gave people a way to not only practice their creativity, but when you’ve got a job and you’ve gotta deliver AND you want to be creative, sometimes it’s just better to limit yourself. To say, I’ve gotta do it in this amount of time, and with this many plug-ins, and just get on with the job.

So, I think it was a really important exercise in producing something under the gun. I’m just totally impressed with that. Now, some people may have spent many, many hours doing it. You didn’t give them any time restriction of course. [ed. Other than the usual submission deadline.]

Well it gives them a chance to experiment in ways they may not when they are under the gun. So, hopefully, it helps them develop new tools and techniques so that, when they are, they can just pull them out of the bag real quick.

Exactly.

Now, back to the listening modes…of the creation of something so clear…I think that Norbert, and those of us who are looking to create story with sound, are really looking for the beginning, middle and end of the story. There’s just such a richness when you create something like that out of nothing. I mean it’s very magical, in a way. It’s very alchemical that he could imagine this and take these simple sounds that you gave him, and create something else out of it.

I have many exercises in my classes, in my book too, to promote and provoke that kind of creativity…and he just has it naturally. I’m really impressed with that.

Yeah, he created some really…the sounds are very deceptive. They’re very close to what he was going for, and if you’re not paying attention very closely you would accept them for what he was trying to create. So, it’s very interesting…he took them from that base electrical interference sound “Causal,” and he really shifted them to, if you listen to them without knowing where they originally came from, you’re going to identify them as a different “Causal” element.

That’s true. Of course, in film sound, you just need to add a little bit of image, and then you’re locked in. The brain will completely embrace that as being the real sound. The helicopter sound, which seemed to be a popular rendition, really reminded me of “Apocalypse Now” and the helicopter sound being synchronized to the fan in the room where Martin Sheen is. Walter Murch did a lot of work to get that sound as an iconic element in the whole film. So, it’s wonderful to hear this work.

Well, that kind of ties in with Michel Chion’s assertion, that he apparently takes a lot of flack from his French peers for, that we don’t watch a film as a visual piece and we don’t listen to it as an audio piece, we “view” it as an audio-visual piece. The audio doesn’t exist without the picture and the picture doesn’t exist without the audio, they’re genetically entwined.

Well, I think he comes up with that, because that’s how our real world is. We get outside of the idea that we’re making movies, and we just wander around in the natural world and the urban environment and our social circles…pretty much everything has its natural marriage of image and sound. And we don’t pull them apart, except in very unusual circumstances.

Another theoretical model that I’m publishing in the next few weeks – it’s actually going to be coming out in the journal “The New Soundtrack” [ed. Publication is released in March] – is a model called “Sound Spheres.” When we hear a sound that isn’t associated with an image – and this isn’t just in film, this is anywhere – and we cannot identify it, it creates a lot of energy in our conscious awareness. Because we’re either going to laugh, or we’re going to be scared, or we’re going to want to find out, “What is that source?” We want to identify it, and usually we look around the corner, or we ask a question; somehow we try to identify the visual image that’s associated with that sound if it’s not already there.

So, our real world, and our brains, are wired to have these things come together. That’s why I believe Chion’s theory holds up in film.

Oh, absolutely. It’s just funny, to me, because I have a copy of Claudia Gorbman’s recent translation of his compendium Film, a Sound Art, and he mentions that that view is fairly widely accepted abroad…but not so well in France. [laughs]

Well…[laughs]…I’d be kind of curious to hear the rationale for the non-acceptance, just because I’m very theoretically involved in this. And I would like to understand what the rationale is for not accepting that. Does he go into that, or is it just a reference to somebody?

It’s just a reference in the opening of the book, not to anyone or thing specifically, but it’s an interesting situation there. [laughs]

[laughs] Well, maybe one of the readers will know about that.

Perhaps.

That would be interesting.

But Norbert’s piece…you know…going along with that, the bulk of the sounds that he created…they’re immediately recognizable and identifiable. There’s very little, except for one piece where he goes a little more impressionistic…

Yeah, I want to play that again while I’ve got it up, just to remind myself and make a comment about that…

Yeah, the one that he kind of labelled as “Astral Monks”

Yes.

Is that what he called it?

Yeah, I believe so. [ed. He actually called it “Space Monks Chanting.”]

It sounds like a chant that is almost done through a vocoder – that is what it reminded me of, actually.

Yeah, it reminded me of that too. So, it was still kind of grounded, because it had that vocal element in it.

Yeah, well, my guess is that there is no REAL vocal in this, but it hits our ears, and we identify it, as if it were vocal…which really is also exciting to play with when you’re working with, say, monsters or alien languages…things like that…to realize that you can create the sounds almost from pure white noise. Or sine waves, whatever you’re starting with. You can make vocal sounds.

Now, a lot of those sounds, when we hear them, sound like robots…when…character recognition, optical-character recognition…I think it’s the other way around actually…where they take text and turn it into a verbal rendition, doing this kind of phonetically. Often it’s hilarious, because pronunciation is totally off. It sounds a little like the accents are wrong, but the machine is recognizing the letters and producing some kind of sound that is just being produced through an algorithm.

The other way to do it, of course, is to record a bunch of little syllables of a real person, have them come in and sample them…but what I think he’s done here is sort of the pure overtones that sound like a vocal change. I play the didgeridoo, for example, and it really reminded me a lot of what you can do with a didgeridoo. You can actually vocalize along with the tube resonating with the lips vibrating. So, it had a lot of similarities that he got just out of this one waveform. So, I’m impressed with that.

Yeah, he definitely had a lot of well rounded work going on here. He really displayed his skills, and…I’m not surprised at all that he won. [laughs]

Yeah, so we’ll look forward to the next Challenge.

Norbert wins Sound Design Challenge!

I am honored to announce that I have won October’s Sound Design Challenge on Dynamic Interference. The site’s host Shaun Farley conducted an email interview with me afterwards which you can read below. The original entry on Shaun’s site can be found here. Here’s the interview:
To start off, would you please tell us a little bit about your background? How you got into audio, what you’ve done and what you do now. Simple little things like that.
I was always interested in audio – in the beginning completely from a musical side. My parents gave me a good background in piano; later I moved on to playing bass and electric guitar. Since I had a great affection for computers in the 80s, I started coding some MIDI on my old Atari ST. However, I never thought about pursuing a career in this field – mainly because in my home town there was no way to get into sound engineering, and music was considered a safe way into poverty… So I became a chemist by trade, working at various universities, and kept audio and music as a hobby for a long time.
When I moved to Manchester in 2004 I started re-thinking the matter. I realised from small demo recordings I did with my own band that I really like to do this. So I embarked on a sound engineering course at the School of Sound Recording and straight away started to set up my own studio and work with bands.
After completing the course the school offered me a position – and after weighing the pros and cons I decided to quit my career as a chemist and fully dedicate myself to sound. I started out in the technical department assembling and servicing computers and studios. On top of the course I had just finished, this gave me a very profound technical background.
During this time I was offered my first job in post production. Since I love watching movies and meddling with sound it was only natural to follow this direction – so I reduced my work with bands and changed the focus of my studio to post production work.
At the school I was later offered the chance to move more into teaching and course development – drawing on skills that I had brought from my work as a chemist at various universities in Europe. I developed and delivered various courses for film and TV production. During that time I also trained to become an Avid as well as an Apple Certified Instructor as part of my professional development. I also became an ‘Expert’ in Pro Tools – I did almost every course of the Avid curriculum on offer.
In early 2010 came another turning point – my wife and I got the chance to move to Brazil. Since April I have been working in the southern hemisphere as a freelancer in audio production, as well as an instructor.
Wow. And I thought I bounced around the place before figuring out this field was what I should pursue. You mentioned in an earlier e-mail that you felt your approach was very “real world.” I might agree with you for the first two sounds you created, but I felt your sounds progressed into a more impressionistic realm as I was listening through. Care to comment?
I find it quite interesting how many engineers I know in my age group (i.e. mid-30s) have moved into sound or video production from another career. When I did my trainer certification for Apple, none of the participants in the course had started out immediately in the media industry!
When I started to work on the challenge I felt it would be interesting to transform the source sound into something we can recognise from the ‘real world’. With this in mind I created the helicopter, the submarine sonar and the heartbeat / alarm. All of these were pretty straightforward after doing some initial time-stretch tests with the samples to see what extremes they can yield.
I guess what made the sonar and the heartbeat / alarm interesting was that they form scenes rather than being just a single sound. The heartbeat / alarm combination was actually inspired by Esteban Misito’s winning entry to the last challenge.
It was the EQ part (which I found a really interesting restriction) that forced me to be more experimental. I played a lot with automation and layering to arrive at the spaceship engine. Finally, the space monks are the result of ‘throwing plug-ins at the sound’ – no challenge without fun!
Did you instantly know which sound you wanted to work with, or did you do some experimentation before deciding? What made you choose to work with the electronic interference sound?
I first gravitated towards the hoover but after passing all three sounds through a spectrum analyser (Inspector by RND – great free plug-in by the way…) I found that the electric buzz would be the best one to go with. It’s a constant sound that contains a defined set of harmonics which are relatively easy to separate. I did some quick time-stretch and EQ tests with all three sounds, just to hear the extremes and to see how well I could isolate individual frequencies.
When I tried to isolate frequencies in the hoover sound no matter how narrow I chose my Q and how steep I made the high- and low pass filters I always had the impression that it was still ‘noisy’. The ‘metal table’ on the other hand I found difficult to loop because of its strong decay.
That’s an interesting analysis of the audio files. I’m wondering how many other people used a similar process, if any.
You obviously got some great sounds out of your selection, but I really liked the sonar sound you created. It’s so spare and focused compared to the original material. How did you pull that one together?
The electronic interference had a pronounced harmonic close to the characteristic frequency of a sonar – the trick was to extract this properly. Since my EQ unfortunately only goes to +/-18 dB I needed to process the sound twice with the same settings (max. boost at around 1k and really steep HPF and LPF on either side) to reasonably remove the rest.
Then I got myself a sonar as a reference and checked the pulse length and timing. I cut my freshly created sine wave (a signal generator would have been easier, for sure…) in exactly the same way and added volume automation to reproduce the sound of the pulse better (as loud as it gets in the beginning and a dip in the middle of each pulse).
Then I layered a pitched-down version of the original interference with some volume and pan automation on top and added a lot of reverb to create that characteristic sonar decay – pronto!
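[ed. For the curious, the pulse recipe Norbert describes – full level at the onset, a dip in the middle of each ping, and a long reverb-style decay – can be sketched roughly in Python. All values below (frequency, pulse length, timing, decay) are illustrative guesses, not the actual settings from the entry.]

```python
import math

def sonar_ping(freq=1000.0, pulse_len=0.06, repeat=1.5,
               pings=3, sr=48000, decay=6.0):
    """Rough sketch of a sonar ping: a short sine burst whose level
    dips in the middle of each pulse, followed by an exponential
    tail standing in for the reverb send. Parameter values are
    illustrative, not the settings from the challenge entry."""
    out = [0.0] * int(repeat * pings * sr)
    n_pulse = int(pulse_len * sr)
    for p in range(pings):
        start = int(p * repeat * sr)
        # the pulse itself: loudest at the edges, dipping in the middle
        for n in range(n_pulse):
            pos = n / n_pulse                          # 0..1 through pulse
            env = 1.0 - 0.5 * math.sin(math.pi * pos)  # mid-pulse dip
            out[start + n] += env * math.sin(2 * math.pi * freq * n / sr)
        # crude exponential decay in place of the reverb tail
        tail_len = int(0.5 * sr)
        for n in range(tail_len):
            i = start + n_pulse + n
            if i < len(out):
                out[i] += 0.3 * math.exp(-decay * n / sr) * \
                          math.sin(2 * math.pi * freq * (n_pulse + n) / sr)
    return out
```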
So you really got down to an almost granular level there on that particular sound. Did you approach any of the other sounds you created in a similar way?
I was really amazed what you can do with a simple hum if you really push it. The sonar was my personal favourite (and surely took me most time to make).
Fortunately I didn’t need to go that deep for the other sounds – most of the time I applied pitch-shifting followed by EQ to enhance the spectrum in the areas that I needed. To illustrate, the heartbeat / alarm was done with two constant pitch shifts and two EQs; the helicopter with a time-dependent pitch-shift, an EQ, a flanger to simulate interference of the source sound wave with its reflections from the ground and – of course – reverb.
It might be interesting to know that I did everything with the standard Pro Tools LE plug-ins – EQ3, D-Verb, the new AIR plug-ins and elastic audio…
Those plug-ins are a lot better than many people give them credit for. Were you using them to follow through on sounds you were hearing in your head, or was it more of a discovery process?
I prefer to analyse what I hear in my head and try to achieve that sound with the tools that I have available. In the end your DAW and all your plug-ins should only ever be a means to an end, nothing else. When I started studying audio engineering I often followed the ‘trial and error’ approach (especially with EQ) and I was rarely happy with the results. I found out quickly that stopping for a second to think what you need to do before you twiddle with a control always gives you a better result – usually in less time.
Of course – especially in sound design – a lot of experimentation is needed to create new, refreshing soundscapes that don’t sound like they come out of a sample library. But that experimentation should – at least in my humble opinion – never end up in endless plug-in twiddling without knowing where one wants to go.
I agree, it’s better to have an idea of where you want to go if you’re working on a project. I have to admit, though, that I do at times like to experiment for the sake of experimentation. Shifting topics a little: you also said in that previously mentioned e-mail that you were impressed by some of the more sci-fi oriented sounds. Can you name a few that really stood out in your mind, and describe what it is about them that appeals to you?
I agree that experimentation is necessary to explore and fuel your creativity. Where would we be without the beautiful sound designs of Ben Burtt, who created many of his ‘Star Wars’ sounds by experimenting? That brings me straight to the first sound that stood out for me – Bob’s lightsabre. For me the closeness to Burtt’s sound was fascinating – as was the fact that it was created in a completely different way from the original.
Then there were Marcel’s sounds – especially ‘the call’ and the ‘sci fi gun’. He used the metal table – a sound that I found particularly difficult to manipulate, so I was very interested what he made out of it. The ‘Sci Fi Gun’ to me stands out for its realism while ‘The Call’ had a great eerie musical touch to it – I can hear it applied in upcoming horror blockbusters!
There were some outstanding entries in October’s challenge, and many that are worth listening to that didn’t get selected as a finalist. I’m hoping people took the time (or will) to listen to some of those as well. Is there anything else you’d like to add before we finish?
Oh yes, I do remember great sounds from the entries that didn’t make it to the finals. There was a ‘5th Element’-type space taxi somewhere which I liked a lot!
To round up, it’s time to say a big thanks to you for putting this challenge together every month. I hope it gains more and more popularity, since it’s a brilliant way to exchange ideas and approaches to sound design. Thanks as well to the judges for taking the time to listen through the entries, and of course to everyone who voted for me. This particular challenge gave me some great insights into how far you can go with careful thinking about the processes you use – I’m certainly looking forward to the next ones!
Thanks for the kind words.