Week 2 - Class 1

In this class, we are going to learn about amplitude, and we are also going to work with two new aspects of the Audition software: the session and the mixer.

Amplitude

When we look at a waveform display, its 'height' is the amplitude. Most people think of amplitude as volume, but it represents more than that - it also defines the extent of 'events' within an audio composition. Think of any normal sound: in addition to having an overall volume, there is also a starting 'ramp' (although it might be very fast) and an ending ramp (also potentially very fast). We can see these event curves at the start and end of this waveform:


Audition provides a simple tool for working with event start and end ramps: the Fade-in and Fade-out handles. These handles let us make 'destructive' changes to the start and end of the waveform, allowing us to change our perception of a sound's onset and release. These ramps are very important to how we respond to a sound; very sharp changes (or transients) will be alarming, while long attack and decay ramps will be more soothing.
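
To make these ramps concrete, here is a rough sketch (in Python with numpy - not anything Audition actually runs) of what linear fade-in and fade-out ramps do to the samples themselves. The function name, the fade lengths and the assumption of a mono float array are all just for illustration:

    import numpy as np

    def apply_fades(samples, sample_rate, fade_in_s=0.05, fade_out_s=0.25):
        """Apply linear fade-in and fade-out ramps to a mono array of samples."""
        out = samples.astype(float)                 # work on a float copy
        n_in = int(fade_in_s * sample_rate)
        n_out = int(fade_out_s * sample_rate)
        if n_in:
            # ramp from silence up to full level over the 'attack'
            out[:n_in] *= np.linspace(0.0, 1.0, n_in)
        if n_out:
            # ramp back down to silence over the 'release'
            out[-n_out:] *= np.linspace(1.0, 0.0, n_out)
        return out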

Within our editor, overall volume can be changed using the 'Amplify' effect, which scales all of the samples in the file by a single gain amount. You will also note that you can adjust the left/right balance, which can provide some directionality to the sound. However, the editor isn't the best place to make these kinds of directional and volume decisions for a mix; think of Amplify as an audio clean-up tool rather than a mixing tool.
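
As a loose illustration (not Audition's actual processing), here is what an overall gain change plus a left/right balance adjustment look like at the sample level, assuming a stereo numpy array shaped (num_samples, 2); the names and defaults are made up for the example:

    import numpy as np

    def amplify(stereo, gain_db=0.0, balance=0.0):
        """Scale every sample by one gain value (in dB), with an optional
        left/right balance (-1.0 = hard left, 0.0 = centered, +1.0 = hard right)."""
        gain = 10 ** (gain_db / 20.0)              # convert dB to a linear multiplier
        left = gain * min(1.0, 1.0 - balance)      # pull the left side down as balance moves right
        right = gain * min(1.0, 1.0 + balance)     # pull the right side down as balance moves left
        return stereo * np.array([left, right])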

The last thing we will look at is the 'Normalize' process, also in the Amplification menu. Normalizing is a helper function: it looks at all of the samples in an audio file, finds the loudest one, then uses that peak to work out how much gain can be applied before the loudest sample hits full scale - and applies that gain to the whole file. You can think of it as one big, but relative, volume change across the entire file. This is useful when you get a file (or generate a tone) that just isn't loud enough to work with: using this tool, you can end up with a better starting place.
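
The normalizing idea itself is simple enough to sketch in a few lines of Python. This is only an illustration of the concept - the function name and target level are assumptions, not Audition's internals:

    import numpy as np

    def normalize(samples, target_peak=1.0):
        """Find the loudest sample, then apply a single relative gain so that
        the peak lands at target_peak (full scale in a -1.0..1.0 float file)."""
        peak = np.max(np.abs(samples))
        if peak == 0:
            return samples.copy()          # a silent file has nothing to scale
        return samples * (target_peak / peak)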


Audition Sessions

While we've been able to do some composition using Audition, most of that work has been an emulation of the 'tape splicing' techniques I described in class. In fact, virtually all audio composition (and media composition in general) is done using a timeline interface with multiple lanes - which, in the case of audio compositions, are called 'tracks'.


In Audition language, this timeline/track system is called a session, and is a special type of file that contains references to the other files, as well as location and other information for those files. A loaded session will show all of the audio events as they occur over time, and is a much easier way to 'composite' together a soundscape.
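
Conceptually (and only conceptually - this is not Audition's actual session file format), a session is just a collection of references plus placement information, something like this small Python sketch:

    from dataclasses import dataclass, field

    @dataclass
    class Clip:
        source_file: str       # a reference to the audio file on disk - not the audio itself
        start_time: float      # where the event begins on the timeline, in seconds

    @dataclass
    class Track:
        name: str
        clips: list = field(default_factory=list)

    @dataclass
    class Session:
        tracks: list = field(default_factory=list)

    # A tiny two-track session: all of the audio stays in the referenced files.
    session = Session(tracks=[
        Track(name="Ambience", clips=[Clip("forest.wav", 0.0)]),
        Track(name="Foley",    clips=[Clip("footsteps.wav", 4.5)]),
    ])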


The media browser becomes very important when working with sessions; this is where you bring in audio files, then drag them onto the session timeline for composition. You can also further edit a file by double-clicking on its name in the media browser.

Audition Session Mixer

Remember how we discussed overall volume and location (panning), but said that the audio editor wasn't the place to do this? We said that because Audition has a system for non-destructive changes to volume and pan position that is much easier to work with. It is the session mixer, and it contains a volume control and pan control for each track in the session:


It is really important to understand the relationship between the tracks in the session viewer and the channels in the mixer - more issues have cropped up as a result of misunderstanding this relationship than any other. Each mixer channel affects all of the events on a single track - and only that track. If you adjust the wrong channel, you will alter the wrong track. The easiest way to verify which mixer channel controls which track is to check the NAME of the track/channel: there is one channel per track, and they are matched up as shown here:


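To make the track/channel relationship concrete, here is a rough conceptual sketch of a mixdown: one volume and one pan value per track, applied to everything on that track and then summed to stereo. It assumes mono tracks of equal length, and the simple constant-power pan law is just one possible choice - none of this is Audition's actual signal path:

    import numpy as np

    def mix_session(tracks, volumes_db, pans):
        """Mix several mono tracks (equal-length numpy arrays) down to stereo.
        Each track gets one volume (in dB) and one pan position (-1.0 left to
        +1.0 right) that affect every event on that track - and only that track."""
        out = np.zeros((len(tracks[0]), 2))
        for track, vol_db, pan in zip(tracks, volumes_db, pans):
            gain = 10 ** (vol_db / 20.0)
            angle = (pan + 1.0) * np.pi / 4.0           # simple constant-power pan law
            out[:, 0] += track * gain * np.cos(angle)   # left channel
            out[:, 1] += track * gain * np.sin(angle)   # right channel
        return out
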
Using the session view, session mixer and some sound files, it becomes vastly easier to create a composition that changes (both subtly and significantly) over time. The session view is the key to almost all non-performance composition, and is at the heart of everything we will do this quarter.

In-class Assignment:

Using the Audition session system, create a new session and use at least 4 audio files to create a 30-second composition. Use the mixer to control the volume and pan (location) position to make the sound immersive and engaging. You will play it for me before you leave the classroom.

Assignment Due for Wednesday:

1. Using at least 10 files from previous work, create a 3-minute composition that, to your ears, sounds great! Make sure that you use the mixer to open up the sound.
2. Find and watch two (2) videos about Adobe Audition sessions, the Audition mixer, or some other (Audition-focused) compositing function. Email me (darwin.grosse@gmail.com) the names or links of the videos that you watched.

Week 2 - Class 2

To this point, we've learned how to create events (Week 1) and how to composite them over a timeline (Week 2, Class 1). The next thing we need to do is learn some creative ways to improve the stagework of the composition.


I use the phrase 'stagework' thoughtfully; if you treat sonic effects as if they were actors on a stage, you will see that certain concepts really work well:

So when you are creating your soundstage, make sure you are paying attention to the location/panning, amplitude/volume and timing of each of the sound bites as if they were individual actors; this will help you achieve a full, but not over-full, production of your composition. But one thing we can't really do with the tools we've covered so far is have the actors move around the stage - which is, of course, a critical part of acting!

Movement within the soundscape is done through parameter automation. The automation system in Audition is pretty much identical to the automation systems of other media tools: you have lanes within each track that allow you to change parameters (amplitude, pan position) smoothly over time. There are two kinds of automation: clip automation and track automation.

Track automation is the most often-used: it creates an automation line that is 'over' the whole track, whether or not there are audio events, and even if there are different events on the same track. You get to the track automation by 'revealing' the automation lane, and selecting which types of automation you want to see - and which one is the currently editable automation line.


The only disadvantage of track automation is that it stays in place, even when you move clips around. This is good for maintaining overall volume and panning, but not so good if you have automation moves that are specific to a particular audio clip. If you want to 'attach' automation to a particular clip, you need to use clip automation. This is available on the clip display itself, and is manipulated by hovering over, then clicking on, one of the lines shown on the clip. From there, you can create keyframe points, adjust volume and panning, and create unique event structures for each clip in a track.
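
Under the hood, an automation line is just a set of keyframe points that get interpolated over time and multiplied into the audio. Here is a small Python sketch of that idea for a volume line; the function name, the keyframe format and the linear interpolation are assumptions for illustration, not Audition's implementation:

    import numpy as np

    def apply_volume_automation(samples, sample_rate, keyframes):
        """Apply a volume automation line described by (time_in_seconds, gain)
        keyframe points, interpolating linearly between them."""
        times = np.arange(len(samples)) / sample_rate
        key_times = [t for t, _ in keyframes]
        key_gains = [g for _, g in keyframes]
        envelope = np.interp(times, key_times, key_gains)   # the smooth line between the points
        return samples * envelope

    # Example: fade a clip up over two seconds, hold it, then duck it near the end.
    # automated = apply_volume_automation(clip, 44100,
    #                 [(0.0, 0.0), (2.0, 1.0), (8.0, 1.0), (10.0, 0.3)])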


So, when do you use each kind of automation? It depends on what you are after. In general, most people start off using clip automation while laying out the structure of the piece - while composing and compositing. Once the structure is in place, you then 'mix' the results together, with overall variations and movement, using track automation. Since you have presumably edited your audio events to be (mostly) as you need them, you will probably end up with few clip edits and lots of track automation.

In-class Assignment:

Using the Audition session and automation systems, modify your current homework assignment to animate volume and location changes to make the mix even more interesting. I will want to hear this before you leave the class.

Assignment Due for Monday:

1. Using at least 10 files from previous work, create a new 3-minute composition that sounds great! You will want to use automation to create a dynamic and interesting mix.