Sample-Based Hip-Hop & House

When it comes to the roots of Hip-Hop, Rap and House music production, one of the most contentious and important elements of these three styles is the technique known as sampling. Since the beginning, the craft of taking a recording from one context and reconstructing it in another has been fundamental to the Hip-Hop and House sound, and it defines that sound just as much today. When it comes to mixing these kinds of records, the mix really depends on the samples themselves and what role they play inside the song. Are the samples being used as the main melody/rhythm section, or are they being used to complement the melody or the underlying rhythm section?

To understand how to mix sample-based Hip-Hop, House and other forms of sample-based music, we must first understand what sampling really is. It is generally defined as: 

The technique of digitally encoding music or sound and reusing it as part of a composition. 

Sampling pretty much falls into two categories: ‘Loop Samples’ and ‘Chop Samples’. For the purposes of this article, we’ll just call them ‘Loops’ and ‘Chops’. Loops are made either from breaks (sections where most of the music briefly drops out) or from snippets of whole records and musical compositions (think 2 Live Crew, Dr. Dre, or Puff Daddy-produced records from the ’90s). In the former, space is left in the arrangement where other musical elements can be placed. In the latter, you are working with a full arrangement.

Chops are short audio slices of a record or sound source that can be copied, cut, pasted, rearranged, tweaked and re-tweaked. They can range from a single drum hit (kick, snare, percussion, cymbal), to an individual instrument note or chord, to a vocal word, phrase, or melody. Chops can even be cut up and rearranged into new loops that become entire compositions, or the foundation for additional production and loops.

Because the arrangements you may be dealing with can be very particular, the mix of these kinds of records generally tends to begin from a different place. A good starting philosophy is to think about mixing what isn’t there in the song instead of mixing what is. Let me go into a little more detail here.

Say you have a sample loop track, a kick track, a snare track, and a hat track or two pulled up on your mixer. What you don’t have are percussion tracks, a bass track or tracks, or any tracks of melodic instrumentation. All of those missing elements need to be pulled out of the sample itself. Quite often when mixing these kinds of records, I first find myself carefully processing the sample to bring out the bass line information as much as possible. Most of the time this is done through surgical equalization, such as filtering and narrow notch boosts and cuts.
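To make that concrete, here is a minimal sketch of one way to do a surgical low-end emphasis in code: isolate the bass band of the loop and blend it back under the original (a parallel boost). The function name, the 60-250 Hz band, and the +4 dB amount are my own illustrative assumptions, not a recipe; `sample` is assumed to be a mono float array and `sr` its sample rate.

```python
# Minimal sketch: emphasize the bass line buried in a sample loop by
# isolating its band and blending it back in (a parallel EQ boost).
# `sample`, `sr`, the 60-250 Hz band, and the +4 dB amount are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def emphasize_bass(sample, sr, lo=60.0, hi=250.0, boost_db=4.0):
    # 4th-order Butterworth band-pass isolating the bass fundamentals
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    bass_band = sosfiltfilt(sos, sample)        # zero-phase, keeps the groove aligned
    gain = 10 ** (boost_db / 20.0) - 1.0        # extra level added on top of the original
    out = sample + gain * bass_band
    return out / max(1.0, np.max(np.abs(out)))  # crude safety against clipping

# Quick usage with a fake two-tone signal (80 Hz "bass" + 1 kHz content):
if __name__ == "__main__":
    sr = 44100
    t = np.arange(sr * 2) / sr
    sample = 0.3 * np.sin(2 * np.pi * 80 * t) + 0.3 * np.sin(2 * np.pi * 1000 * t)
    louder_bass = emphasize_bass(sample, sr)
```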

Other times, you may have to tighten up the dynamics of the bass information with some form of single-band or multi-band compression before you EQ; sometimes it’s better to apply that compression after you EQ. I would say, though, that if and when you can avoid compression, avoid it.

Lastly, you might even have to consider adding some kind of sub-harmonic synthesis to get the bass to sit properly with the other low end information in the mix, which will most likely be coming from the kick and the 808 (if there is one). That doesn’t always mean adding low sub tones to the sample; sometimes it means creating an upper bass register tone that doesn’t necessarily exist in the sample’s bass information.
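As a rough illustration of manufacturing that upper-register tone, the sketch below band-limits the sub content, full-wave rectifies it (which shifts its fundamental up an octave), and blends the result back in quietly. This is just one crude way to demonstrate the idea, not how any particular sub-harmonic or harmonic-generator plug-in works; the function name and band edges are assumptions.

```python
# Sketch: generate an "upper bass" tone from sub content the sample barely has.
# Full-wave rectifying a band-limited sub signal produces energy an octave up,
# which is then filtered and mixed in low. All values are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def add_upper_bass(sample, sr, sub_lo=35.0, sub_hi=90.0, blend_db=-12.0):
    sub_sos = butter(4, [sub_lo, sub_hi], btype="bandpass", fs=sr, output="sos")
    sub = sosfiltfilt(sub_sos, sample)

    rectified = np.abs(sub)            # doubles the fundamental (octave up) plus harmonics
    rectified -= np.mean(rectified)    # remove the DC offset rectification creates

    # Keep only the generated upper-bass region before blending it back in
    up_sos = butter(4, [sub_lo * 2, sub_hi * 2], btype="bandpass", fs=sr, output="sos")
    upper = sosfiltfilt(up_sos, rectified)

    blend = 10 ** (blend_db / 20.0)
    return sample + blend * upper
```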

If there’s also a key element or elements in the sample, such as a guitar riff, string line, or multi-instrument melody, I will do everything I can to bring that information out as well, because those elements are probably what convinced the producer to use the sample in the first place.

To make things harder, the sample doesn’t always provide enough elements, or enough information from those elements, to fill out the overall sound of the arrangement. To rectify this, if possible (I say this because sometimes it’s not possible due to problems inside the sample that are outside of your control, such as performance and tuning), recreate the missing information by replaying and recording the missing elements on top of the sample itself. This doesn’t always work though, as it can change the overall aesthetic of what the sample brings to the piece of music. Sometimes the whole point is for the sample to sound broken and disheveled, not smooth and polished.

Now, the other side of mixing sample-based Hip-Hop and House arrangements comes from the drums. To all you producers out there, drum selection should be the highest of priorities. Come on, it’s Hip-Hop!! The sound of the drums is of the utmost importance, not just for the overall success of the record, but also in defining the producer’s overall sound.

When a client sends in a Hip-Hop song for me to mix, especially from a producer who is very sample-centric, my main goal is to change the sound of the drums as little as possible. I do this not because I’m a lazy bastard, but because the sound of the drums is what gives the producer his signature, and messing with it too much can throw off the vibe the drums create with the sample.

Now, I’m not saying I haven’t resuscitated my fair share of drum tracks, or even outright replaced every drum sound with a different one, but my goal at first is to respect what’s going on with the drums. Hopefully, the only processing I’ll do is to make sure the drums perfectly complement the sample when the two play together in the mix.

Another important consideration when mixing sample-based Hip-Hop and House is to “match the space.” For example, if the source sample sounds like it was recorded to a tape machine and mixed on an analog console, and the drum sounds are coming from some kind of drum machine, synth plugin, or stock drum sample pack, chances are the two together won’t make a whole lot of sense. When mixing, I’m always considering what I can do to make the drums relate better to the sample but still hit hard and poke out of the mix. This might involve processing the drums, processing the sample, or both. Generally it takes some experimentation, because everything depends on how the sample and drums sound together.

Oftentimes, it’s key to “match the space.” It isn’t the easiest thing in the world; it takes a trained, seasoned ear to really get it right, or a bit of luck if you’re new to the game, lol. But you can train yourself by listening to the ambience in the sample and learning how to recreate that specific ambience around the drums.
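If you want to experiment with this idea in code rather than with a reverb plug-in, one hedged sketch is to convolve the dry drums with a short ambience and tuck it underneath them. A real session would use an impulse response chosen by ear to resemble the sample’s space; here a synthetic decaying-noise IR stands in, and the decay time and wet level are purely illustrative.

```python
# Sketch: glue dry drums to a roomy sample by convolving them with a short
# ambience and blending it under the dry hits. A decaying-noise impulse
# response stands in for a room you would pick by ear. `drums`, `sr`,
# the decay time, and the wet level are illustrative assumptions.
import numpy as np
from scipy.signal import fftconvolve

def match_space(drums, sr, decay_s=0.25, wet_db=-15.0, seed=0):
    rng = np.random.default_rng(seed)
    n = int(decay_s * sr)
    ir = rng.standard_normal(n) * np.exp(-np.arange(n) / (0.2 * n))  # short exponential tail
    wet = fftconvolve(drums, ir, mode="full")[: len(drums)]
    wet /= max(1e-9, np.max(np.abs(wet)))                            # normalize the tail
    return drums + (10 ** (wet_db / 20.0)) * wet                     # tuck it under the dry drums
```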

On the other side of things, a producer may use a sample in a more complex musical arrangement: bass, drums, percussion, main melodic instrumentation, lead instrumentation, and a sample or even multiple samples. There, it’s all about making all the elements as close-knit as possible and narrowing the sample down to the key information or instrumentation inside of it. In a simple arrangement of just drums and a sample, I might focus on pushing up the bass information in the sample. But in a more complex arrangement, I’ll generally be trying to remove the low end information of the sample by means of subtractive attenuation or high-pass filtering.
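The “subtractive” move in that dense case is often nothing fancier than a high-pass filter on the sample so the dedicated kick and bass own the low end. A bare-bones sketch, with the cutoff as an assumption to set by ear:

```python
# Sketch: clear the sample's low end out of the way of the kick and bass
# in a dense arrangement. The 120 Hz cutoff is an assumption; tune by ear.
from scipy.signal import butter, sosfilt

def highpass_sample(sample, sr, cutoff_hz=120.0, order=4):
    sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, sample)
```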

One of the most difficult situations that comes up is when the song is composed of multiple, overlapping samples: a drum break for the groove, a melodic sample or samples, a vocal sample, and even a percussion loop, quantized or time-stretched and shaped to work together to form the backbone of the record’s sound. This is a difficult sound to produce properly, as it requires many small edits and decisive pitch shaping to get the tuning and groove right. Sometimes, things just work better when they are slightly off.

From the engineering perspective, this is one area where it is really, really important to be a fan of Hip-Hop or whatever sample-based style you are mixing. It’s a difficult call deciding which elements need to be tight and together and which should stay loose with a natural swing feel. Having that natural love and appreciation for Hip-Hop goes a long way when it comes to mixing this specific style.

After that, the mixing itself isn’t easy either. Because the samples are usually taken from completely different genres, audio sources, and time periods, the key to getting this Hip-Hop style to work is making all the samples sound like they belong together. Regrettably, there are too many variables involved for me to provide a clear cut, step by step solution for mixing in this kind of situation, but I gotta say, equalization is gonna be your daily fuck buddy here.

All in all, the key to mixing sample-based production is understanding what’s going on in the arrangement of the record. I’m not talking about arrangement in terms of intros, verses, hooks, bridges, and outros; I’m talking about arrangement in terms of what each track or stem inside the mix is doing. If the arrangement you are mixing is simple and sparse, use the sample to fill up that missing space. If the arrangement is really dense, get down to the core of what the sample is doing inside the song.

In conclusion, if you keep those drums pimpin’, the vocals clear and present, and the low end super heavy, you’ll always be the boss with the sauce when it comes to mixing sample-based Hip-Hop. You might even make a gangsta cry. I have.

Kris Anderson/Senior Engineer

If you are looking for recording studios in Chicago, give us a ring at (312) 372-4460

Studio 11

345 N. Loomis St., Chicago, Illinois

Organizing and Consolidating Prior to the Mixdown

So you’ve done it. You made, or received, your first song with over 50 tracks, and it’s ready to be mixed down. You have 30 drum channels, 10 synths, 10 vocals, 2 bass tracks, and 10 SFX/ambient tracks. Where do you start?

A big part of mixing that often gets overlooked is the part where you are supposed to enjoy it (it’s why you’re in this business in the first place, right?). Mixing should never be frustrating, and it should keep moving while the juices are flowing. Because let’s face it, it’s no fun if you are spending more time looking for a sound in a sea of tracks than actually mixing the song. So it’s wise to first organize the tracks into some sort of order to avoid wasted time and confusion.

Track Organization:

Everyone has their own process, but it seems commonplace that engineers (myself included) usually start with drums at the top of the session and proceed down from there. Traditionally it will look something like this (from top to bottom): kicks, snares (claps), hi-hats, toms (overheads), cymbals, percussion, bass. The rest of the tracks, such as guitars, strings, synths, piano, and vocals, tend to vary more with personal preference. I tend to arrange my tracks in the order I’m going to mix them in: drums typically get treated first (because drums are the backbone of a song and need to sound good first), then I proceed to the bass to make it sit right with the kick drum, then melody and/or vocals. Again, there is no correct way to do this (in fact, I know some engineers who start with the vocals and carve around them, because they deem that the most important part of the song). So your order doesn’t have to be exactly this, but it helps to have a formula you use across all your sessions, so that after a while you can identify the location of a track without even thinking about it.

Next, clearly label each track, and I’d even recommend color coding them for easier identification (most DAWs will let you do this). And just as you keep a consistent track order from session to session, it’s also a good idea to keep a consistent color for each instrument group (for example, my drums are always red, instruments green, and vocals yellow).

So now you have the order of the tracks, but you still have over 50 tracks to keep track of and make work together. First, take a look at what you have and decide whether the song actually needs all of those tracks. For example, do you really need three layered hi-hats? Are they important to the song? If it’s your song, take a listen and strip out what you don’t need (however, if it’s for a client, be wary of deleting tracks without asking first). If you explain that a track isn’t adding anything and gets in the way of other sounds, then chances are they won’t mind if you get rid of it.

In mixing, less is always more, which is why my next step is usually to combine similar sounds, either by bussing them to a single mono or stereo track, grouping them together, or assigning them to the same output to create a submix. Before doing this, it is important to listen for sounds that sit in the same frequency range and can be processed similarly. For instance, it would be unwise to group a bass and a vocal track together, because they are going to be processed quite differently during the mixdown. The first things to consolidate into a single track are any channels in your DAW containing overdubs of the same instrument. So if you have three guitar tracks recorded with the same or similar tone, with the same mic, and so on, level them against each other and then consolidate them to one track. Next, look for tracks like similar background vocal harmonies, hi-hats, and similar-sounding percussion (bongos, congas) and consolidate those.

The key here is to reduce the track count and stay as organized as possible so you can focus more on mixing and less on finding. Some engineers prefer to keep every option available for the mixdown, keep everything separate, and not worry as much about the organization of the session, which is fine. Whatever works best for you is the best way. But unless you are working on a 50+ channel console or control surface and have a photographic memory, staying organized and consolidating tracks to keep the track count down is less time consuming, visually easier, and will lead to a more efficient mixdown.

Dan Zorn, Engineer

Studio 11 Chicago

209 West Lake Street

For inquiries about scheduling a tour or booking time, call us at (312) 372-4460 or drop us a line at studio11chicago@gmail.com

Mixing at Low and High Volumes

If you are an audio engineer, or an aspiring one, you’ve probably heard that it’s best to mix at low volumes. This is because, according to the Fletcher-Munson curves, our sensitivity to loudness varies with frequency. Generally speaking, at the same dB level we hear the speech frequencies (2-5 kHz) more easily than low or very high frequencies, and the louder the mix gets, the smaller the subjective difference between these ranges becomes.

Figure A: The Fletcher-Munson equal-loudness curves. At 50 dB, for example, the low and very high frequencies need to be reproduced well above 50 dB in order to be perceived at the same volume as content around 1 kHz.
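To put rough numbers on those curves, the sketch below evaluates the standard A-weighting formula, which approximates the ear’s frequency sensitivity at moderate listening levels. It isn’t the Fletcher-Munson data itself, just an easy way to see how far “down” the lows and extreme highs sit relative to 1 kHz.

```python
# Sketch: A-weighting (a rough stand-in for ear sensitivity at moderate levels),
# illustrating why lows and extreme highs need more level to feel as loud as 1 kHz.
# Purely illustrative of the Fletcher-Munson idea, not the contour data itself.
import math

def a_weighting_db(f):
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00  # normalized so A(1 kHz) is ~0 dB

for f in (50, 100, 500, 1000, 4000, 8000, 16000):
    print(f"{f:>6} Hz: {a_weighting_db(f):+6.1f} dB")
# 50 Hz comes out around -30 dB and 16 kHz around -7 dB,
# while 1-4 kHz sits near or slightly above 0 dB.
```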

When mixing at a low volume, you eliminate the illusion that the frequency spectrum is balanced, which is what we perceive when the volume is loud (notice the compression, or flattening out, of the curves at high dB levels in Figure A). For example, you may think the kick drum sounds tight and punchy next to the bass when the volume is loud, but when you turn the volume down you realize the kick hasn’t been processed enough and gets lost in the mix. Judgments about arrangement, EQ, and compression are much better made at low volumes.

So if mixing at low volumes gives more accurate results, why is this article called Mixing at Low and High Volumes? Well, there is one thing that is difficult to hear at low volumes: sibilance. If you aren’t familiar with the term, sibilance is the result of exaggerated “s”, “ch”, or “sh” sounds from a vocalist, which cause the frequency response to peak anywhere from 4-10 kHz (sometimes higher). A highly sibilant vocal may look something like this on a frequency analyzer:

Figure B: A noticeable sibilant spike around 8-10 kHz.

These sounds come across as harsh and become fatiguing to the ears over time, so they must always be controlled or tamed in some way. At a low volume, however, not many things sound fatiguing or harsh. You can listen to a 5 kHz tone for hours at a low volume, but on a loud system it will drive you nuts within seconds. So when listening for harshness, turn up those speakers and start de-essing and EQing away!
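Under the hood, a de-esser is basically a compressor keyed off the sibilant band: it watches the 4-10 kHz region and ducks the vocal only when that energy spikes. Here is a stripped-down sketch of that mechanism; the band, threshold, ratio, and time constants are placeholders to tune by ear, not a preset from any plug-in.

```python
# Stripped-down de-esser sketch: key gain reduction off the 4-10 kHz band
# and apply it to the whole vocal. `vocal`/`sr`, the band, threshold, ratio,
# and time constants are illustrative placeholders, not a product preset.
import numpy as np
from scipy.signal import butter, sosfilt

def deess(vocal, sr, band=(4000.0, 10000.0), thresh_db=-30.0, ratio=4.0,
          attack_ms=1.0, release_ms=60.0):
    sos = butter(4, band, btype="bandpass", fs=sr, output="sos")
    detector = np.abs(sosfilt(sos, vocal))          # sibilant-band level

    # One-pole envelope follower with separate attack/release times
    a_att = np.exp(-1.0 / (attack_ms * 1e-3 * sr))
    a_rel = np.exp(-1.0 / (release_ms * 1e-3 * sr))
    env = np.zeros_like(detector)
    prev = 0.0
    for i, x in enumerate(detector):
        coeff = a_att if x > prev else a_rel
        prev = coeff * prev + (1.0 - coeff) * x
        env[i] = prev

    env_db = 20.0 * np.log10(np.maximum(env, 1e-9))
    over = np.maximum(env_db - thresh_db, 0.0)      # dB above threshold
    gr_db = over * (1.0 - 1.0 / ratio)              # gain reduction in dB
    gain = 10.0 ** (-gr_db / 20.0)
    return vocal * gain                             # duck the vocal only when "s" energy spikes
```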

Dan Zorn, Engineer

Studio 11 Chicago

209 West Lake Street

312 372 4460

Studio11chicago@gmail.com

Taming Vocals Without Compression

During any given session, there is a good chance you will run into parts where the artist gets really quiet and parts where they get really loud. While dynamics are good to have in a song, having too much can hinder the mix, with some parts getting lost and others being too loud. Often when this situation arises, new engineers slap a compressor on the channel to tame some of the peaks and bring up some of the quiet parts. More times than not, though, this does more harm than good, because one setting doesn’t work for every part on the channel. I’m going to show you a way to fix this problem without relying solely on compression to do it for you.

When recording vocals, it’s common for the vocalist to move around, whisper, sing, rap, and yell, all within the same take. If you are constantly getting overly dynamic recordings that swing from quiet to loud to clipping, first take the time to address it at the source. Start by giving your client a heads up to try to stay within a certain distance of the microphone for the best quality. If you mention that it will give a better result, chances are they will have no problem trying to keep their distance in check. Of course they will still move around, and things will still sound overly dynamic, but if it’s even 10% better than before, it helps.

Secondly, check the settings on your preamp. Many channel-strip preamps have a compressor built in. “But wait, I thought you said taming without compression!” Well, compression on the front end, before the signal is converted into your DAW, not only becomes part of the recording chain before mixing, but can prevent clipping and work wonders when it comes time to mix the song. Use compression at the source, not when it’s too late.

Here at Studio 11, we have two main preamps that we use for vocals: the Manley VoxBox and the Drawmer 1969 Mercenary Edition tube preamp. Both have compressors built into them, and as a result the compressor becomes an integral part of the front end chain. If your preamp doesn’t have one, look into putting a compressor in your chain (but be careful to choose one without too much coloration, or you can do more harm than good). With a gentle ratio, a fast attack, and a few dB of gain reduction, it can work wonders: the fast attack will grab the loud transients to avoid clipping, tame some of the mid-sized transients, and bring the quiet parts up relative to the rest. If all is done correctly, you will end up with a hotter, healthier signal going into your A/D converter and a more solid, thicker looking waveform.

But there will still be parts that are too quiet. As I mentioned before, the easy (and lazy) fix is to slap a compressor on there to bring out some of those softer parts. The problem is that it may work for some of the material, but when the very loud parts hit, it’s going to sound audibly compressed and lifeless (which you don’t want). It’s a good rule of thumb to always use compression as a tool, not an effect. In most cases compression should be mostly transparent and shouldn’t take you away from the performance of the vocal. Of course there are cases where over-compressing can be used as an effect (think the “all buttons in” setting on the Universal Audio 1176), but in most cases subtlety is best.

So instead of jumping right to the compressor, bring out that simple Gain plug-in you forgot existed in AudioSuite in Pro Tools (or, if you’re in Ableton Live, split the clip and adjust the volume of each section manually). Go through the track, using your ears and the size of the waveforms as a guide, and bring up the quieter parts so they sit at the same level as the rest. Of course, I’m not saying the entire vocal track should be equally loud, because that would be boring and stripped of dynamics, but put the parts on a similar volume plane so they are always audible. Keep in mind there will be parts that are supposed to be quieter (think intros, bridges, outros), but the parts in the middle of phrases, i.e. words that were sung quietly because the vocalist moved away from the mic, or a word that came out much louder than the others, can be gained up or down accordingly.
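If it helps to see the same move as numbers instead of waveforms, here is a hedged sketch of section-by-section gain leveling: measure each marked region’s RMS and nudge only the clearly quiet ones toward a reference level. The region list would come from your own edits; the target, tolerance, and maximum boost are illustrative assumptions.

```python
# Sketch of manual clip-gain leveling: measure each marked region's RMS and
# nudge only the quiet ones toward a target, leaving intended dynamics alone.
# `vocal`, `sr`, the regions, the target level, and the tolerances are all
# illustrative assumptions, not a rule.
import numpy as np

def level_sections(vocal, sr, regions, target_dbfs=-18.0, tolerance_db=3.0, max_boost_db=6.0):
    out = vocal.copy()
    for start_s, end_s in regions:                    # regions in seconds, from your own edits
        seg = slice(int(start_s * sr), int(end_s * sr))
        rms = np.sqrt(np.mean(out[seg] ** 2)) + 1e-12
        rms_db = 20.0 * np.log10(rms)
        short = target_dbfs - rms_db
        if short > tolerance_db:                      # only touch regions clearly too quiet
            boost_db = min(short, max_boost_db)
            out[seg] *= 10.0 ** (boost_db / 20.0)
    return out

# e.g. leveled = level_sections(vocal, 44100, regions=[(12.0, 12.8), (41.5, 42.1)])
```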

If you prefer a smoother, more continuous volume adjustment (instead of gaining sections), reach for volume automation. Riding the volume to level out quiet parts can lead to a very rewarding result in the end, and it will certainly be better than slapping on a compressor to “fix” those spots. Once everything is leveled out and audible, you can then add a compressor with a gentle ratio to glue it together even further (if you even need to), or to bring out quiet transients you couldn’t reach with the gain tool or automation. Following these steps will help your vocals feel more present in the mix and will avoid that dreadful “over-compressed” sound that you hear so often. Just try it out; play around with front end compression, gain, and volume automation, and you’ll be well on your way to taming those vocals without sucking the life out of them.

Dan Zorn, Engineer

Studio 11, Chicago IL

For more information about our services, send an email to Studio11Chicago@gmail.com, or contact us directly at 312 372 4460

How To Make Your Digital Tracks Sound Analog… Using Digital

Let’s face it: with the convenience and quality of modern DAWs, plug-ins, and virtual instruments, it’s hard to justify spending a fortune on physical analog gear. Digital processing has become so good that, with the right plug-ins and techniques, you can get a surprisingly convincing analog sound. I’m going to show you just a few ways to bring the pleasing qualities of analog into that sterile digital recording.


 

Adding Warmth: Analog Modeling plug-ins, EQ, De-essing

One way to add warmth to your digital sounds is to run them through analog modeling plug-ins. Instead of grabbing the stock Pro Tools compressor, try plug-ins like the CLA-2A from Waves (modeled after the famous LA-2A compressor) or the SSL bus compressor. You’ll find these compressors react a little differently than a stock digital compressor and tend to have more coloration, more natural saturation, and a bit more noise (all characteristic of analog) to add to the signal. For EQ, Waves also offers an API parametric EQ, modeled after the modules in API’s analog consoles, that sounds great. Actual analog EQs add harmonic distortion (more on this later) simply because they are slightly nonlinear, whereas a digital EQ can introduce harmonics unrelated to the fundamental that sound unnatural when pushed hard. But you’ll notice when using these modeled EQs, the API EQ in particular, that the designers took the nonlinear harmonic distortion of the actual unit into consideration, so the result more closely resembles an “analog” sound.

Another simple way to warm up a track is to subtractively EQ some high end content. Rolling off, or reducing, some unwanted highs (specifically the harsh 4 kHz-8 kHz range) can add the kind of smoothness that analog processing gives naturally. Also, experiment with de-essing things other than vocals, such as cymbals and guitars (the Renaissance de-esser works great), for a similar smoothing effect. Take some time to learn the characteristics of each plug-in; you will be able to utilize and control them much better when you do.
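For the high-end roll-off, even a very plain filter gets most of the way there. The sketch below splits the signal at a corner frequency and adds the highs back a few dB quieter, which behaves like a gentle high-shelf cut; the corner and amount are assumptions to tune by ear, and this is not a model of any plug-in mentioned above.

```python
# Sketch: "warm up" a bright digital track by gently pulling down everything
# above a corner frequency. A low-passed copy keeps its level while the
# removed highs come back a few dB quieter, like a soft high-shelf cut.
# The corner frequency and cut amount are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def soften_highs(x, sr, corner_hz=5000.0, cut_db=-3.0):
    sos = butter(2, corner_hz, btype="lowpass", fs=sr, output="sos")
    lows = sosfiltfilt(sos, x)                        # content below the corner, untouched
    highs = x - lows                                  # everything the low-pass removed
    return lows + (10.0 ** (cut_db / 20.0)) * highs   # highs return a few dB quieter
```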

Adding Harmonics: Harmonic Distortion, Emulated Tape Saturation

A common problem with digital sounds is that they can lack the harmonics analog gear naturally adds through its physical circuitry. Even simply running a sound through an analog console will add pleasing harmonics related to the fundamental pitch by the time it reaches the end of the circuit. There are a couple of ways to add harmonics to a track in the digital domain. A go-to plug-in I use to add a bit of harmonics or grit is Lo-Fi. Even with just a small amount of its saturation, distortion, or bit/sample rate reduction, you can introduce new harmonics that give the sound a ton more analog character. Another favorite of mine is the Kramer Tape emulation. It achieves a similar sound to Lo-Fi in a slightly different way (tape compression instead of bit and sample rate reduction), adding tons of the warm harmonics a real tape machine would produce. Adding harmonics in this way can also be thought of as adding distortion to the signal. Some people consider distortion a negative thing in audio (especially in the digital domain), but used subtly it can not only make the sound richer and more pleasing to the ears, it can also increase its subjective loudness (by adding harmonic energy closer to our most sensitive hearing range). …So if you want to make your mixes appear louder… hint hint.
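The common mechanism behind all of these is a slightly nonlinear transfer curve that generates harmonics related to the input. Here is a tiny, generic soft-clip example of that idea; it is not a model of Lo-Fi, Kramer Tape, or any other product, and the drive and mix values are arbitrary starting points.

```python
# Sketch: generic soft-clip saturation. A tanh transfer curve adds odd
# harmonics related to the input - the basic mechanism behind "analog style"
# saturation - though this is NOT a model of any plug-in mentioned above.
import numpy as np

def saturate(x, drive_db=6.0, mix=0.3):
    drive = 10.0 ** (drive_db / 20.0)
    wet = np.tanh(drive * x) / np.tanh(drive)   # normalized so full scale stays near full scale
    return (1.0 - mix) * x + mix * wet          # parallel blend keeps it subtle

# A full-scale 100 Hz sine run through this gains quiet components at
# 300 Hz, 500 Hz, ... (odd harmonics) while the fundamental stays dominant.
```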

Increasing Noise: Raising the Noisefloor

In a modern digital system the noise floor is pretty much complete silence. That can be a good thing if you want crystal clean sounds, but one of the pleasing qualities of analog is that it isn’t clean; it’s in fact a little dirty. So bring up a sample, or generate a sound, of white noise, vinyl noise, or background hum, and tuck it into the background of the song so it is just barely audible. The way I usually add noise is with a free plug-in by iZotope called Vinyl: put Vinyl on a separate track (so you have better control over it), increase the “mechanical noise”, roll off some of the low frequencies with an EQ, and let it sit quietly in the background. Aside from the noise being aesthetically pleasing and adding to the overall harmonic content of the song (increasing subjective fullness), it also fills in the gaps where a song cuts to silence (drops, breaks). Complete silence in the digital domain just tends to sound strange to our ears, especially right before or after a full spectrum of sound. This weirdness is most noticeable on headphones, where your ears are blocked off from outside noise (for a split second you’d swear you were wearing earplugs!). We like a noise floor; we hear one every second of every day. It not only adds an analog quality, but also a very desirable human element.
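If you’d rather roll your own noise floor than reach for a plug-in, the idea is simple enough to sketch: generate low-level noise, filter out the lows so it doesn’t muddy anything, and sit it far below the mix. The -60 dBFS level and 100 Hz high-pass corner below are assumptions to adjust by ear.

```python
# Sketch: a hand-rolled "noise floor" - low-level filtered noise sitting just
# under the mix so silences never go dead digital-black. The -60 dBFS level
# and 100 Hz high-pass corner are assumptions to tune by ear.
import numpy as np
from scipy.signal import butter, sosfilt

def add_noise_floor(mix, sr, level_dbfs=-60.0, hp_hz=100.0, seed=1):
    noise = np.random.default_rng(seed).standard_normal(len(mix))
    sos = butter(2, hp_hz, btype="highpass", fs=sr, output="sos")
    noise = sosfilt(sos, noise)
    noise /= np.max(np.abs(noise))                     # normalize, then scale way down
    return mix + (10.0 ** (level_dbfs / 20.0)) * noise
```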


 

These are the basics of making your digital tracks sound more analog using digital plug-ins. There is even more you can do (adding tape wow and flutter in some cases, for example), but these are good starting points for bringing a little more analog flavor into your digital tracks. Play around with different combinations, listen to analog recordings, and try to mimic them; you’ll find you had the tools to do it this whole time.

 

Dan Zorn, Engineer

Studio 11 Chicago

 

For rates on recording, mixing, or mastering or for general questions please send us email at Studio11chicago@gmail.com , or contact us directly at (312) 372-4460.

 

 
