The Value Of Getting a Good Mix Before Mastering

Today, we are going to discuss the importance of the mix process and all the things that should be considered before and after printing the final mix of your song. After 16 years in business and thousands of satisfied clients, these are the issues we run into most consistently when mastering, as well as mixing, for clients.

In an optimal world, once music has been recorded, edited, and mixed, the final result would simply be committed to vinyl or CD, or uploaded as an MP3 ready for online distribution. This still happens in a limited fashion, but it is becoming more and more uncommon. Even people who are very adept, competent mix engineers still welcome the fresh ears of a mastering engineer and a final round of quality control before the music is released to the general consumer. However, there does seem to be a widespread misunderstanding of what the mastering process actually is and what it really does. One thing is for sure: mastering a song is not like mixing a song in any way. With that said, the mix process is arguably the most important stage of all audio production.

Proper music mastering relies heavily on the quality of the mixdown. At this point it is imperative that the overall balance of the mix has been executed and shaped to the best of the mixer's ability. All the sophisticated dynamic interplay between musicians and vocalists should have been well controlled to present a clear, direct sound that expresses the overall message of the song. Once the stereo two-track mix has been finished and rendered, the mastering stage is where every last bit of awesomeness is coaxed out of the mix. This often involves adding things like air, clarity, depth, perceived volume, punch, sheen and warmth, as well as making tweaks to the overall stereo image. Any and all of these processes, when applied properly, can very much enhance the final result of the mixdown. However, mastering is usually not capable of adjusting mix balances by more than 0.5-1 dB without recognizable adverse effects. These adjustments would be undertaken using regular, multi-band, parallel, or side-chain compression techniques, equalization, or gain/attenuation. Therefore, it is utterly crucial that the main balances of the mix are spot on so the mastering engineer can make the small, cumulative changes which add up to a bigger overall improvement to the audio. Employing a highly recommended professional mix engineer for your project helps ensure that your mixdown is as good as it can be.

Nowadays, most mastering engineers have a complete understanding of the shrunken budgets found throughout the world of music production. Many professional mastering engineers will offer an extra pair of fresh ears over a mix. They will provide guidance or suggest changes in the mix to ensure the best quality and sound, which in turn produces better mastering results. This helps the artist, band, musicians or producer and the mastering engineer alike through better end results. In most circumstances, the mastering engineer will provide this service on the basis that the job is moving forward. Mixdowns come in many different shapes and sizes these days, usually depending on the genre or style of music. It would be quite ordinary for the mastering engineer to advocate that any very obvious issues be adjusted in the mix. A judgment is first made on the mix quality to understand how much of an assessment, if any, should be given by the mastering engineer. Also, can the person who mixed the song truly understand the advice and hear well enough to make the changes being advocated? We say this because there isn't much point in creating a situation that an amateur mix engineer can't work through for lack of the right equipment or experience. In any case, when the mastering engineer offers this advice, the artist, musician or producer should always save a copy of their original mixdown in case things get a little wacky when changing and rebalancing the mix.

Below is a list of 11 things that considerably help when preparing your mixdown for mastering, and that make for a great mix in their own right.

11 TIPS FOR PREPARING YOUR MIX FOR MASTERING

1) Come to the conclusion that you are 100% in love with the mix or mixes. The mastering process typically isn’t going to make an amateur or rotten sounding mix come to life. The more in love you are with the finished mix, the more in love you will be with the finalized mastered version.

2) Take off any maximizers, limiters or other plugins on the master channel that are there to louden things up. Every now and then, mix engineers will quickly use some kind of clipping/limiting plugin on the master channel for the purpose of mix approvals, as well as to hear how the finished mix might behave during mastering treatment. In these circumstances this is completely okay, but always be prepared to remove these plugins or gear before sending the final mix away for mastering. This is the mastering engineer's job after all. They deal with loudness/clipping/limiting after applying whatever EQ, compression, and noise reduction treatments may be needed during the mastering process. Mixes that are limited or maximized also fail to leave proper headroom or a sensible average peak level for the mastering engineer. Without good headroom, not much can be done without doing more harm than good to the mix. Mixes with no headroom or a bad average peak level can also make it hard to use the analog equipment that high-level mastering engineers are sought out for. As for other plugins on the master fader: if you are not 100% in love with what a plugin is actually doing, just remove it.

3) Stay away from peak levels reaching 0 dBFS. We see all kinds of ideas and opinions on the internet about where to keep your peak levels pre-mastering. There really isn't any magic number for what that peak level should be before mastering. Just have it firmly burned into your brain that the level is not 0 dBFS.

This concern is largely aimed at those who mix "in the box". If you are mixing on an analog console and rendering a stereo mix back to a digital audio interface of some kind, it's actually much easier. Just don't clip the input on the way back into the interface, and definitely do not apply any further digital processing unless it is for FX or post-production purposes. When you export the newly captured stereo mix, make sure you export it at the native sample rate and bit depth of the session, unless you are mixing from analog or digital tape. The mastering engineer can take care of the rest. (A quick sanity-check sketch for the rendered file appears at the end of this list.)

4) The bounce or render of your mix should be done at the same sample rate and bit depth as the mix session in your software. Let the mastering engineer take care of any bit depth or sample rate conversions, as the high-level ones usually work with top-of-the-line converters. There isn't a perfect bit depth or sample rate to record or work at, but it is best not to change the bit depth or sample rate when you make the final render of your mix for mastering. Your DAW's sample rate conversion is probably not the greatest, so don't be discouraged; just don't use it.

5) It's super beneficial to get into the habit of having a separate, analytical listening session before sending your project off for mastering. Attentively listen to the beginning and end of the song or songs going to mastering for any stray clicks, pops, noises and irregularities. Get rid of anything you do not want, but do it carefully.

6) In solo mode, check each of the vocal tracks in your mix to double-check for any missed garbage like clicks, pops, thumps, plosives, headphone bleed, etc. When all the tracks in the song are playing together, it can be tough to hear these anomalies in the mix. In our experience, vocals are frequently the cause of most undesirable noises in a track. Also check for things like bad edits or crossfades that could cause noise and clicks/pops. These irregularities aren't always easy to hear prior to mastering in the framework of a full mix, when the mix engineer's attention is focused on more important things like balance and FX. But after mastering, this garbage can become much more apparent. In a mastering room with a high-end playback system and low noise floor, these noises are much, much easier to hear, take our word for it. It's commonplace for us to edit out a few random noises and clicks/pops even when mastering music mixed by some of the best mix engineers out there. At the end of the day, it's not difficult to edit these things out in mastering, but it is always welcome when it is minimal.

7) If you think there is any chance that instrumental, performance, radio or other versions of the songs will be needed, make them right away when you print the final original mix. It's not difficult to master these different mix versions at the same time the main mixes are mastered. Going back to master these versions at a later date can become a lot more costly and time consuming, especially if the mastering DAW session wasn't saved or the settings on the gear used weren't documented. In addition, where analog gear is used during the mastering process, doing all of these versions at the same time ensures much better continuity between the main, instrumental, performance, radio or other versions.

8) Always leave room before and after the mix to include any potentially problematic noise (buzz/hiss/hum/room tone). Keeping the noise floor, or noise fingerprint, in the mix allows the mastering engineer to apply transparent noise reduction if needed. Hastily fading out the noise, or editing the beginning and ending of a song very tight, usually prevents the mastering engineer from applying noise reduction transparently. Trimming the heads and tails during mastering doesn't take much time at all, but leaving some of that noise floor for the mastering engineer to work with can be very useful. So just leave it in. Occasionally we will use a small amount of noise reduction on just the very beginning or very end of a song as the last note rings out, but not on the entire song. What helps the most is having a little noise to sample from.

9) Additionally, make sure the timeline selection of the mix you are printing doesn't cut off any information at the start or end of the song. This can happen when the mix involves a lot of DSP-intensive plugins, which have a tendency to delay the audio slightly. This is another reason why it's always a good idea to leave extra space on both ends to be safe. Trimming is easy for the mastering engineer, but finding a problem like this at the eleventh hour is not ideal and definitely not easy to correct if the problem is extreme.

10) Give the printed mix files names that are easy to understand. Here at Studio 11, we always avoid using the word FINAL when naming files. Using dates or times in the file name can also become confusing, as everybody has a slightly different dating system. Digital files have time/date stamps anyway, so we prefer a simple V1, V2, V3, etc., or even just 1, 2, 3 for each version. Overall it's simpler and much easier to deal with.

11) Lastly, if there is only one thing you take away from this information, it is this, and it coincides with what we have been saying:

LISTEN TO THE FINAL PRINT OF THE MIX TO ENSURE A PLUGIN OR DAW GLITCH DID NOT OCCUR BEFORE DELIVERY TO MASTERING.

These kinds of things happen way too often: we will deliver a master back to a client, only for them to discover that a certain plugin in their mix session had a problem that did not occur during playback, but managed to happen during rendering/bouncing and wasn't double-checked. If the client had listened to the actual rendered file, it would have been a simple fix. Catching it after mastering has been completed can be a gigantic problem, especially if your mastering engineer used analog gear, as many do.
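For those who like to automate part of this pre-delivery check, here is a minimal sketch, assuming Python with the numpy and soundfile libraries installed, that covers the mechanical side of tips 2, 3, 4 and 9: peak level and headroom, sample rate and bit depth, and whether the head and tail of the render were cut off. The file name is hypothetical, and none of this replaces actually listening to the bounce from top to bottom.

```python
# Quick pre-mastering sanity check of a rendered mix.
# Assumes numpy + soundfile are installed; "my_song_v1.wav" is a hypothetical file name.
import numpy as np
import soundfile as sf

MIX_PATH = "my_song_v1.wav"

info = sf.info(MIX_PATH)
print(f"Sample rate: {info.samplerate} Hz | format: {info.subtype} | channels: {info.channels}")

audio, sr = sf.read(MIX_PATH)                      # floats in the range [-1.0, 1.0]
peak = np.max(np.abs(audio))
peak_dbfs = 20 * np.log10(peak) if peak > 0 else float("-inf")
print(f"Peak level: {peak_dbfs:.2f} dBFS")

if peak_dbfs >= -0.1:
    print("WARNING: peaks are at or near 0 dBFS -- remove limiting and leave headroom.")

# Rough check that the head and tail were not chopped off mid-sound:
head = audio[: sr // 2]                            # first half second
tail = audio[-(sr // 2):]                          # last half second
for name, chunk in (("Head", head), ("Tail", tail)):
    rms_db = 20 * np.log10(np.sqrt(np.mean(chunk ** 2)) + 1e-12)
    print(f"{name} RMS: {rms_db:.1f} dBFS")
```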

Organizing and Consolidating Prior to the Mixdown

So you’ve done it. You made, or received, your first song with over 50 tracks, and it’s ready to be mixed down. You have 30 drum channels, 10 synths, 10 vocals, 2 bass tracks, and 10 SFX/ambient tracks. Where do you start?

A big part of mixing that often gets overlooked is the part where you are supposed to enjoy it (it’s why you’re in this business in the first place, right?). Mixing should never be frustrating, and should always keep moving while the juices are flowing. Because let’s face it, it’s no fun if you are spending more time looking for a sound in a sea of tracks than actually mixing the song. So it’s wise to first organize your tracks in some sort of order to avoid wasted time and confusion.

Track Organization:

Everyone has their own process, but it seems commonplace that engineers (including myself) usually start with drums at the top of the session and proceed down from there. Traditionally it will look something like this (from top to bottom): Kicks, Snares, (Claps), Hi Hats, Toms, (Overheads), Cymbals, Percussion, Bass. The rest of the tracks, such as guitars, strings, synths, piano, and vocals, tend to vary more with personal preference. I tend to arrange my tracks based on the order I’m going to mix them in. Drums typically get treated first (because drums are the backbone of a song and need to sound good first), then I proceed to bass to make it sit right with the kick drum, then melody or vocals. Again, there is no correct way to do this (in fact, I know some engineers that start with vocals first and carve around that because they deem it the most important part of the song). So your order doesn’t have to be exactly this, but it helps if you have a formula for all your sessions, so that after a while you can identify the location of a track without even thinking about it. Next, clearly label each track, and I’d even recommend color coding them for easier identification (most DAWs will let you do this). And similar to keeping a consistent order of tracks from session to session, it’s also a good idea to keep a consistent color for each instrument group (for example, my drums are always red, instruments green, and vocals yellow).

So now you have the order of the tracks, but you still have over 50 tracks to keep count of and work together. First, take a look at what you have and decide if the song actually needs all of those tracks. For example, do you really need 3 layered hi hats? Is it important to the song? If it’s your song, take a listen and strip down what you don’t need (however, if it’s for a client, be wary of deleting tracks without asking them first). If you explain that a track isn’t adding anything and gets in the way of other sounds, then chances are they won’t mind if you get rid of it. In mixing, less is always more, which is why my next step is usually to combine similar sounds together, either by bussing them to a single mono or stereo track, grouping them, or assigning them to the same output to create a submix. Before doing this, it is important to listen for sounds that sit in the same frequency range and can be processed similarly. For instance, it would be unwise to group together a bass and a vocal track, because they are going to be processed quite differently during the mixdown. The first things that should be consolidated into a single track are any channels in your DAW containing overdubs of the same instrument. So if you have 3 guitar tracks recorded with the same or similar tone, with the same mic, etc., level them against each other and then consolidate them to one track. Next, look for tracks like similar background vocal harmonies, hi hats, or similar sounding percussion (bongos, congas) and consolidate them.

The key here is to reduce the track count and stay organized as much as possible so you can focus more on mixing and less on finding. Some engineers may prefer to have the most available options for the mixdown, keep everything separate, and not worry as much about the organization of the session, which is fine. Whatever works best for you is the best way. But unless you are working on a 50+ channel console or control surface and have a photographic memory, staying organized and consolidating tracks to keep the track count down is less time consuming, visually easier, and will lead to a more efficient mixdown.

Dan Zorn, Engineer

Studio 11 Chicago

209 West Lake Street

For inquiries about scheduling a tour or booking time, call us at 312 372 4460, or drop us a line at studio11chicago@gmail.com

Mixing at Low and High Volumes

If you are an audio engineer, or an aspiring audio engineer, you’ve probably heard that it’s best to mix at low volumes. This is because, according to the Fletcher-Munson curves, our sensitivity to loudness varies with frequency. Generally speaking, at the same dB level we hear the speech frequencies (2-5 kHz) more easily than very low or very high frequencies, and the louder the mix gets, the less of a subjective difference there is between these ranges.


Figure A: The Fletcher-Munson curves. If you look at the 50 dB contour, for example, the level in the low and very high frequencies needs to be much higher than 50 dB in order to match the perceived volume at around 1 kHz.

When mixing at a low volume, you are eliminating the "illusion" that the frequency spectrum is balanced, which is what we perceive when the volume is loud (notice the compression, or flattening out, of the curves at high dB levels in Figure A). For example, you may think that the kick drum sounds tight and punchy next to the bass when the volume is loud, but when you turn the volume down you realize the kick hasn’t been processed enough and gets lost in the mix. So judgments about arrangement, EQ, and compression are much better made at low volumes. If mixing at low volumes reveals more accurate results, why is this article called Mixing at Low and High Volumes? Well, there is one thing that is difficult to hear at low volumes: sibilance. If you aren’t familiar with it, sibilance is the result of exaggerated "s", "ch" or "sh" sounds from a vocalist, which causes the frequency response to peak anywhere from 4-10 kHz (sometimes higher). A highly sibilant vocal may look something like this on a frequency analyzer:


Figure B: A noticeable sibilant spike around 8-10 kHz.

Hearing these sounds throughout a song becomes very harsh and fatiguing to the ears over time, so they must always be controlled or tamed in some way. However, at a low volume, not many things sound fatiguing or harsh to our ears. You can listen to a 5 kHz tone for hours at a low volume, but on a loud system it will drive you nuts within seconds. So when listening for harshness, turn up those speakers and start de-essing and EQing away!
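If you want to double-check your ears, a quick measurement can confirm whether a vocal is running hot in the sibilance band. Here is a minimal sketch, assuming Python with numpy, scipy and soundfile installed; the file name and the -12 dB flag threshold are illustrative assumptions, not a standard.

```python
# Rough sibilance check: compare the 4-10 kHz band against the whole vocal.
# Assumes numpy, scipy and soundfile; "lead_vocal.wav" is a hypothetical file.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

vocal, sr = sf.read("lead_vocal.wav")
if vocal.ndim > 1:
    vocal = vocal.mean(axis=1)                     # fold to mono for analysis

sos = butter(4, [4000, 10000], btype="bandpass", fs=sr, output="sos")
sib_band = sosfilt(sos, vocal)

full_rms = np.sqrt(np.mean(vocal ** 2))
sib_rms = np.sqrt(np.mean(sib_band ** 2))
ratio_db = 20 * np.log10((sib_rms + 1e-12) / (full_rms + 1e-12))
print(f"4-10 kHz energy relative to the full band: {ratio_db:.1f} dB")

if ratio_db > -12:                                 # illustrative threshold only
    print("This vocal looks sibilance-heavy -- listen loud and reach for the de-esser.")
```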

Dan Zorn, Engineer

Studio 11 Chicago

209 West Lake Street

312 372 4460

Studio11chicago@gmail.com

Taming Vocals Without Compression

During any given session, there is a good chance you will experience parts where the artist gets really quiet and parts where they get really loud. While dynamics are good to have in a song, having too much can hinder the mix and result in parts getting lost or parts being too loud. Oftentimes when this situation arises, new engineers slap a compressor on the channel to tame some of the peaks and bring up some of the quiet parts. However, more often than not, this can do more harm than good because it doesn’t work for all the parts on the channel. I’m going to show you a way to fix this problem without relying solely on compression to do it for you.

When recording vocals, it’s common for the vocalist to move around, whisper, sing, rap, and yell, all within the same take. If you are constantly getting overly dynamic recordings that swing from quiet to loud to clipping, first take the time to address it at the source. Start by giving your client a heads up to try to stay within a certain distance of the microphone for the best quality. If you mention that it will give a better result, chances are they will have no problem trying to keep their distance in check. Of course they will still move around, and things will still sound overly dynamic, but even if it’s 10% better than before, it helps.

Secondly, check the settings on your preamp. Many preamps and channel strips have a compressor built in. "But wait, I thought you said taming without compression!" Well, compression on the front end, before the signal is converted into your DAW, not only becomes part of the recording chain before mixing, but can prevent clipping and work wonders when it comes time to mix the song. Use compression at the source, not when it’s too late.

Here at Studio 11, we have two main preamps that we use for vocals: the Manley Vox Box and the Drawmer 1969 Mercenary Edition tube preamp. Both have compressors built into them, and as a result they become an integral part of the front-end chain. If your preamp doesn’t have a compressor on it, look into putting one in your chain (but be careful to get one without too much coloration or you can do more harm than good). If you do have a compressor, a gentle ratio and a fast attack with a few dB of gain reduction can work wonders. The fast attack will grab the loud transients to avoid clipping, tame some of the mid-sized transients, and bring up the quiet parts. If all is done correctly, you will end up with a hotter signal for your A/D converter (the hotter the better for conversion) and a more solid, thicker looking waveform. But there will still be parts that are too quiet. As I mentioned before, the easy (and lazy) fix is to slap a compressor on there to bring out some of those softer parts. The problem with doing this is that it may work for some of the sound, but when there are very loud parts, it’s going to sound very audibly compressed and lifeless (which you don’t want). It’s a good rule of thumb to always use compression as a tool, not an effect. In most cases compression should be mostly transparent and shouldn’t take you away from the performance of the vocals. Of course, there are some cases where over-compressing can be used as an effect (think the "All In" setting on the Universal Audio 1176), but in most cases subtlety is best.
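To make the "gentle ratio, fast attack" idea concrete, here is a bare-bones digital compressor sketch, assuming Python with numpy. It is not a model of the Vox Box or the Drawmer 1969, and the threshold, ratio and timing values are only illustrative, but it shows how a fast attack grabs the loud transients while a low ratio keeps the gain reduction to a few dB.

```python
# Simplified feed-forward compressor: envelope follower -> gain computer.
# Assumes numpy; expects a mono float array; parameter values are illustrative, not a preset.
import numpy as np

def gentle_compress(x, sr, threshold_db=-18.0, ratio=2.0, attack_ms=1.0, release_ms=100.0):
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))   # fast attack coefficient
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))  # slower release coefficient
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(np.abs(x)):
        coeff = atk if s > env else rel
        env = coeff * env + (1.0 - coeff) * s        # track the signal level
        level_db = 20 * np.log10(env + 1e-12)
        over_db = max(0.0, level_db - threshold_db)
        gain_db = -over_db * (1.0 - 1.0 / ratio)     # 2:1 -> only half the overshoot is removed
        out[i] = x[i] * 10 ** (gain_db / 20.0)
    return out
```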

So instead of jumping right to the compressor, bring out that simple Gain plug-in you forgot existed in AudioSuite in Pro Tools (or, if you’re in Ableton Live, split the clip and adjust the volume of the section manually). Go through the track, using your ears and looking at the size of the waveforms, and bring up those quieter parts so that they sit at the same level as the rest. Of course, I’m not saying the entire vocal track should be equally loud, because that would be boring and devoid of dynamics, but put the parts on a similar volume plane so they are always audible. Keep in mind there will be parts that are supposed to be quieter (think intros, bridges, outros), but the parts in the middle of phrases, i.e. words that were sung quietly when the vocalist moved their head away from the mic, or a word that was much louder than the rest, can be gained up or down accordingly.

If you prefer a smoother, more continuous volume adjustment (instead of gaining sections), reach for volume automation. Riding the volume to level out quiet parts can lead to a very rewarding result in the end, and will certainly be better than slapping on a compressor to "fix" those spots. Once everything is leveled out and audible, you can then add a compressor with a gentle ratio to glue it even further (if you even need to), or to bring out quiet transients that you couldn’t reach with the gain tool or automation. Following these steps will help your vocals feel more present in the mix, and will help you avoid that dreadful "overcompressed" sound you hear so often. Just try it out, play around with front-end compression, gain, and volume automation, and you’ll be well on your way to taming those vocals without sucking the life out of them.
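As a rough offline illustration of the "gain up the quiet words" approach, the sketch below measures short-term RMS in half-second windows and nudges only the windows that fall well below the track's typical level. It assumes Python with numpy and soundfile; the file name, window length and 6 dB boost ceiling are assumptions. In a real session you would do this by ear with clip gain or automation and add small fades so the level changes stay inaudible.

```python
# Offline "manual gain ride": bring quiet half-second sections up toward the
# track's median level. Assumes numpy + soundfile; values are illustrative.
import numpy as np
import soundfile as sf

vocal, sr = sf.read("lead_vocal.wav")              # hypothetical file
win = sr // 2                                      # half-second sections
out = vocal.copy()

def rms_db(seg):
    return 20 * np.log10(np.sqrt(np.mean(seg ** 2)) + 1e-12)

# Typical level = median RMS of the non-silent sections
levels = [rms_db(vocal[i:i + win]) for i in range(0, len(vocal) - win, win)]
target_db = np.median([l for l in levels if l > -60.0])

for i in range(0, len(vocal) - win, win):
    seg = vocal[i:i + win]
    level = rms_db(seg)
    if level <= -60.0:                             # leave true silence and room tone alone
        continue
    boost_db = np.clip(target_db - level, 0.0, 6.0)   # only bring quiet parts up, 6 dB max
    out[i:i + win] = seg * 10 ** (boost_db / 20.0)

sf.write("lead_vocal_leveled.wav", out, sr)
```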

Dan Zorn, Engineer

Studio 11, Chicago IL

For more information about our services, send an email to Studio11Chicago@gmail.com, or contact us directly at 312 372 4460

How To Make Your Digital Tracks Sound Analog… Using Digital

Let’s face it, with the convenience and quality of modern DAWs, plug-ins, and virtual instruments, it’s hard to justify spending a fortune on physical analog gear. Digital processing has become so good that, with the right plug-ins and techniques, it can yield a surprising (and convincing) analog sound. I’m going to show you just a few ways to bring the pleasing qualities of analog into that sterile digital recording.


 

Adding Warmth: Analog Modeling plug-ins, EQ, De-essing

One way to add warmth to your digital sounds is to run them through some analog modeling plug-ins. Instead of grabbing that stock Pro Tools compressor, try using plug-ins like the CLA-2A from Waves (modeled after the famous LA-2A compressor), or the SSL bus compressor. You’ll find these compressors react a little differently than a stock digital compressor, and tend to have more coloration, more natural saturation, and a bit more noise (all characteristics of analog) to add to the signal. For EQ, Waves also offers API parametric EQs modeled after the modules in their analog consoles that sound great. Actual analog EQs add harmonic distortion (more on this later) simply because they are slightly nonlinear, whereas a digital EQ can introduce harmonics not related to the fundamental, which sound unnatural when pushed hard. But you’ll notice when using the digital versions of these EQs, in particular the API EQ, that the designers took the nonlinear harmonic distortion of the actual unit into consideration, and as a result they more closely resemble an "analog" sound. Another simple way to warm up a track is by subtractively EQing some high-end content. Rolling off, or reducing, some unwanted highs (specifically the harsh 4-8 kHz range) can add a smoothness to your track that analog processing naturally gives. Also, experiment with de-essing things other than vocals, such as cymbals and guitars (the Renaissance De-Esser works great), to get a similar smoothing effect. Take some time to learn the characteristics of each plug-in; you will be able to utilize and control them much better when you do.
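As a simple illustration of the "roll off the harsh highs" idea, the sketch below splits a track at roughly 5 kHz with a first-order low-pass and mixes the highs back in a few dB down, which behaves like a broad, gentle high-shelf cut. It assumes Python with numpy, scipy and soundfile; the crossover point and the -4 dB amount are illustrative starting points, not a recipe.

```python
# Gentle high-frequency softening: first-order split, then shelve the highs down.
# Assumes numpy, scipy and soundfile; "synth_stem.wav" is a hypothetical file.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

audio, sr = sf.read("synth_stem.wav")
sos = butter(1, 5000, btype="lowpass", fs=sr, output="sos")
lows = sosfiltfilt(sos, audio, axis=0)             # everything below ~5 kHz
highs = audio - lows                               # everything above it

shelf_db = -4.0                                    # how much to soften the top end
warmed = lows + highs * 10 ** (shelf_db / 20.0)
sf.write("synth_stem_warmed.wav", warmed, sr)
```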

Adding Harmonics: Harmonic Distortion, Emulated Tape Saturation

A common problem with digital sounds is that they sometimes lack the harmonics that analog naturally adds from its physical circuitry. Even simply running a sound through an analog console will add pleasing harmonics related to the fundamental pitch by the time it reaches the end of the circuit. There are a couple of ways to add harmonics to a track in the digital domain. A go-to plug-in I use to add a bit of harmonics or grit is Lo-Fi by Waves. Even with just a small amount of saturation, distortion or bit/sample rate reduction, you can introduce new harmonics that give the sound a ton more analog character. Another favorite of mine is the Kramer Tape emulation. It achieves a similar sound to the Lo-Fi in a slightly different way (tape compression instead of bit and sample rate reduction), and you can add tons of the warm harmonics that a real tape machine would produce. Adding harmonics in this way can also be referred to as adding distortion to the signal. Some people think of distortion as a negative thing in audio (especially in the digital domain), but when used subtly it can not only make the sound richer and more pleasing to the ears, it can also increase its subjective loudness (by adding energy closer to our most sensitive hearing range). …So if you want to make your mixes appear louder… hint hint.
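The underlying trick in all of these plug-ins is a nonlinearity that generates harmonics of whatever you feed it. Here is a minimal sketch of that idea, assuming Python with numpy and soundfile: drive the signal into a tanh soft clipper and blend it back under the dry signal. It is not a model of Lo-Fi or the Kramer Tape; the drive and mix values are illustrative.

```python
# Minimal harmonic "grit": tanh soft clipping blended under the dry signal.
# Assumes numpy + soundfile; "drum_bus.wav" is a hypothetical file.
import numpy as np
import soundfile as sf

audio, sr = sf.read("drum_bus.wav")
drive_db = 6.0                                     # how hard to push into the nonlinearity
mix = 0.3                                          # keep the effect subtle

driven = np.tanh(audio * 10 ** (drive_db / 20.0))  # adds harmonics of the fundamental
saturated = (1.0 - mix) * audio + mix * driven

# Keep the overall peak in check so the added harmonics don't clip the file
saturated /= max(1.0, np.max(np.abs(saturated)))
sf.write("drum_bus_saturated.wav", saturated, sr)
```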

Increasing Noise: Raising the Noise Floor

In a modern digital system the noise floor is pretty much complete silence. This can be looked at as a good thing if you want crystal clean sounds, but one of the pleasing qualities of analog is that it is not clean, but in fact a little dirty. So bring up a sample, or generate a sound, of white noise, vinyl noise, or background hum, and put it in the background of the song so that it is just barely audible. The way I usually add noise is through a free plug-in by iZotope called Vinyl. Just put Vinyl on a separate track (so you have better control over it), increase the "mechanical noise", roll off some of the low frequencies with an EQ, and have it sit quietly in the background. Aside from the noise sounding aesthetically pleasing and adding to the overall harmonic content of a song (increasing subjective fullness), it also fills in the gaps of a song that cut out to silence (drops, breaks). Hearing complete silence in the digital domain just tends to sound strange to our ears, especially after or before a full spectrum of sound. This weirdness is most noticeable on headphones, where your ears are blocked off from outside noise (giving the impression, for a split second, that you’re wearing earplugs!). We live with a noise floor and hear it every second of every day. It not only adds an analog quality, but it also adds a very desirable human element.
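If you want to build the noise bed yourself rather than reach for Vinyl, the sketch below generates low-level white noise, rolls the lows off as suggested above, and tucks it far under the music. It assumes Python with numpy, scipy and soundfile; the -60 dBFS level and 150 Hz high-pass are illustrative starting points, so tune them until the noise is just barely audible.

```python
# Raise the noise floor just into audibility under an existing mix.
# Assumes numpy, scipy and soundfile; "mix.wav" is a hypothetical file.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

audio, sr = sf.read("mix.wav")
noise_db = -60.0                                   # level of the noise bed
rng = np.random.default_rng(11)
noise = rng.standard_normal(audio.shape) * 10 ** (noise_db / 20.0)

sos = butter(2, 150, btype="highpass", fs=sr, output="sos")
noise = sosfilt(sos, noise, axis=0)                # keep rumble out of the noise bed

sf.write("mix_with_noisefloor.wav", audio + noise, sr)
```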


 

These are the basics of how to make your digital tracks sound more analog using digital plug-ins. There are even more things you can do (adding tape wow and flutter, for example), but these are good starting points to bring a little more analog flavor into your digital tracks. Play around with the different combinations, listen to analog recordings, and try to mimic them; you’ll find that you had the tools to do it this whole time.

 

Dan Zorn, Engineer

Studio 11 Chicago

 

For rates on recording, mixing, or mastering, or for general questions, please send us an email at Studio11chicago@gmail.com, or contact us directly at (312) 372-4460.

 

 

MIXING HIP HOP AND RAP IN CHICAGO

A good mix of a song is what helps the listener better connect to a piece of music, and it can have a dramatic impact on the song’s overall success. While, again, there are no wrong ways to mix a song, there are certain mix philosophies and methods that are common among many good mix engineers and styles of music.