Panning is a crucial aspect of sound design, yet it is often mislabeled as a task for the mixing engineer. At the mixing stage, engineers typically concern themselves only with the placement and automation of the output (pan pots). The reality is that only you, the designer, know how the SOURCE should move. Modern digital effects and instruments can move a sound around the stereo image at a set rate, following a periodic or nonperiodic contour controlled by a modulator or an envelope.

There is no denying that this is a powerful technique, evident in just about every electronic production these days. But it is not the only way to create larger-than-life source movement. For example, you could: render a sub-mix of duplicated and hocketed outputs (See Blog: Hocketing and Pointillism) panned to several different positions, use an instruction-based sequencer like Csound, or use various delay times to simulate changes in the source’s distance from the surfaces in a room. In this tutorial, I will walk you through one method for giving your sounds speedy, alien-like movement.
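The first option above – duplicating a source and panning the copies to several fixed positions before summing them into a sub-mix – can be sketched in a few lines of code. This is only an illustration, not how any particular DAW does it; the function names and the equal-power pan law are my assumptions, and NumPy stands in for the audio engine:

```python
import numpy as np

def equal_power_pan(mono, pos):
    """Pan a mono signal to pos in [-1, 1] (-1 = hard left) with an
    equal-power law, so perceived loudness stays constant across the field."""
    theta = (pos + 1) * np.pi / 4          # map [-1, 1] -> [0, pi/2]
    return np.stack([mono * np.cos(theta), mono * np.sin(theta)])

def panned_submix(mono, positions):
    """Sum duplicated copies of the source, each panned to a fixed position."""
    mix = np.zeros((2, len(mono)))
    for pos in positions:
        mix += equal_power_pan(mono, pos)
    return mix / len(positions)            # normalize so the sum doesn't clip

# three copies spread hard left, center, hard right
tone = np.sin(2 * np.pi * 220 * np.arange(44100) / 44100)
stereo = panned_submix(tone, [-1.0, 0.0, 1.0])
```

In practice you would hocket the copies (alternate which one sounds) rather than stack identical duplicates, but the panning arithmetic is the same.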

This is what my flying FX sound like….




Change is movement and movement is change. In this world, nothing can move without changing – so… why should we pan electronic sounds without changing the source?

ILDs and ITDs – Humans are remarkably talented localizers. We know where sounds are coming from because of ILDs (interaural level differences) and ITDs (interaural time differences). Simply put, it is either a difference in level between our ears or a difference in arrival time at each ear that gives away a sound source’s location. This is why phasers and flangers seem to make sounds move around – they slightly delay a copy of the signal and pan it. Lower frequencies tend to be resolved using ITDs, and higher frequencies tend to be resolved using ILDs.
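To get a feel for how small these time differences are, here is a sketch of Woodworth’s classic spherical-head estimate of the ITD. The head radius is an assumed average value, and the function name is mine:

```python
import numpy as np

HEAD_RADIUS_M = 0.0875   # assumed average human head radius, meters
SPEED_OF_SOUND = 343.0   # m/s in air at roughly room temperature

def itd_seconds(azimuth_deg):
    """Woodworth's spherical-head estimate of interaural time difference.

    azimuth_deg: source angle from straight ahead (0 deg) toward one ear (90 deg).
    The far ear's extra path is the straight chord plus the arc around the head.
    """
    theta = np.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + np.sin(theta))

# a source directly beside one ear arrives ~0.66 ms earlier at the near ear
print(round(itd_seconds(90) * 1000, 2))   # -> 0.66
```

Well under a millisecond – yet that is enough for the brain to place a low-frequency source, which is why the tiny delays inside phasers and flangers read as movement.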

Behind, In Front, Above, Below – Making sounds seem like they are behind or above you is difficult and can really only be accomplished with binaural recording or a surround sound system. However, if you think for a moment about diffraction (sound bending around obstacles) and the shape of our outer ears, you may realize this: our outer ears are the main reason we can tell when something is behind or in front of us. This goes for sound from above as well – both the outer ear’s unique filtering and the pattern of reflections around the head and shoulders tell us that something is overhead. Sound arriving from below is difficult to study – it is, after all, an alien concept. Can you think of any situation where sound would be coming from DIRECTLY below you? Maybe while skydiving, or swimming….

The Doppler Effect – A sound’s pitch seems to shift when its source moves toward or away from you at significant speed. The deeper the pitch modulation you apply while moving the pan position, the faster the source will seem to be traveling.
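The textbook formula for a source moving straight toward (or away from) a stationary listener shows how speed maps to pitch. The function name is mine; the physics is standard:

```python
def doppler_shift(f_source, v_source, c=343.0):
    """Observed frequency for a source moving directly toward (+v) or
    away from (-v) a stationary listener, with c the speed of sound in m/s."""
    return f_source * c / (c - v_source)

# a 440 Hz source approaching at 34.3 m/s (~123 km/h) is heard near 489 Hz
print(round(doppler_shift(440.0, 34.3), 1))   # -> 488.9
```

When designing movement by hand you rarely compute this exactly – a pitch envelope that rises on approach and falls on departure is usually enough to sell the illusion.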




These three cues all change to some degree when a sound is in motion. It takes a reflective room to notice delay-time changes and a significant speed to notice the Doppler Effect; but, subconsciously, we are always using this information to localize sound. In my example, I set up an LFO modulating the pan position on a few variations of two similar samples, a little bit of pitch enveloping, and multiple mono delays with different delay times and automated wetness. Avoid automating or modulating the delay times themselves – we don’t want that characteristic tape-emulation artifact, the dramatic “whip” in pitch.
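The core of the patch – an LFO sweeping the pan position – can be sketched as follows. This is a minimal stand-in for what the sampler’s modulation window does, assuming a sine LFO and an equal-power pan law; the function and parameter names are mine:

```python
import numpy as np

def lfo_pan(mono, sr, lfo_hz, depth=1.0):
    """Sweep a mono signal across the stereo field with a sine LFO.
    depth scales the sweep width (1.0 = hard left to hard right)."""
    t = np.arange(len(mono)) / sr
    pos = depth * np.sin(2 * np.pi * lfo_hz * t)   # pan position in [-1, 1]
    theta = (pos + 1) * np.pi / 4                  # equal-power pan law
    return np.stack([mono * np.cos(theta), mono * np.sin(theta)])

sr = 44100
noise = np.random.default_rng(0).standard_normal(sr)  # one second of noise
stereo = lfo_pan(noise, sr, lfo_hz=2.0)               # two full sweeps per second
```

Layering this with the pitch envelopes and static-time delays described above is what creates the combined illusion of speed and distance.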

Sample Layout

pic - battery with samples

Pan Modulation

pic - Modulation window

Pitch Envelopes

pic - pitch envelopes

Serial Delay

pic - delay 1

Parallel Delay (Pre-Mixer Return)

pic - delay 2


So, this is what my arrangement of affected and processed textures sounds like…


Tune in to future blog posts for more on simulating source movement, psychoacoustics, and audio processing!