Indablog // Production Tips - Indaba Music
News, Sessions and oddities from the Indaba Community
Friday May 01, 2009 at 08:00 AM
Entering a Contest on Indaba by Matt P.
For all you new users out there, the contests here at Indaba are a pretty big deal! Right now, there are FOUR contests available for you to enter on Indaba, and we thought it’d be nice to give you guys a little refresher on how to enter. In the past we’ve offered the separated tracks from songs by Mariah Carey, Third Eye Blind, and K-OS to be remixed and re-imagined by you guys, and the winners received incredible prizes. So not only do you get a chance to (virtually) work with incredible artists, but you could also get some cool swag, cash, or placements out of it! Win win? I’d say so.
But before you get hired as John Legend’s in-house remixer, you should probably figure out how to enter Indaba contests. Click on the Contests link under the Community tab on the home page, then select the contest you’d like to enter. (Note: the contests that show a number of days ‘Remaining for Entries’ to the right of them are the ones you can still enter.) Near the middle of every contest page you should see this button:
If you’re not a member of Indaba Music, aren’t signed in, or have already entered the contest, you’ll see some variation of this button. As long as you click the blue, rectangular button on the contest page, you should be fine. NOTE: If you haven’t entered the contest by clicking this button, you will not be able to download the contest tracks; you may only preview them at the top right of the contest page.
After you click the ‘Enter the Contest’ button, you should see two buttons:
You can always click the ‘Previous’ button at the bottom left of the page if you need to return to this page.
For those of you who clicked on the Use Indaba Session Console button, you can click the ‘Create a New Mix’ link to the top right of the mixdown player to begin your mix.
If you’d like to add tracks to be mixed on the Indaba Session Console, click on the ‘Tracks’ tab and select the ‘Upload File’ link at the top right of the list of tracks available to be mixed. The contest uploader only accepts MP3, 16-bit WAV and AIF formats, so make sure your tracks are all in those formats. Once you’re done with your mix, create a Mixdown in the Indaba Session Console, and it will appear in the mixdown player on the Mix & Submit Entry page. You can preview your mixdown on the player, and if you’re happy with the result, select the Submit Entry button to submit your entry to the contest.
If you selected It’s Cool, I’ll Use My Own, you’ll find the separated tracks to be downloaded. Select the Download button to the right of the tracks or audio pack you want to download.
Remember, downloading files for Contests and/or Featured Sessions does not count towards your monthly bandwidth. Yay!
Once you’ve completed your mix, click the blue Next button at the bottom of the page to get to the Mix & Submit Entry page, and upload your mix. Again, the uploader only accepts MP3, 16-bit WAV, and AIF formats, so make sure your mix is in one of those formats. Once you upload your mix, you can preview it by clicking the blue play button to the left of your entry. Make sure to press the Submit Entry button so your entry appears in the contest submissions page!
Well that’s basically it, so feel free to go crazy in the contests now that you know everything you need to know to enter!
Thursday April 16, 2009 at 09:52 AM
We were perusing YouTube last night when we came across this video made by Indaba user Jack Hemsworth. Jack took the time to record his musical collaboration on Indaba through the creation of the song, “Lonely Soul”. We’ve already made a set of “official” tutorial videos that you can access from the help page, but it’s nice to see an Indaba member helping others out by showing the way he works. We encourage you to spread the knowledge and show the world how you use Indaba!
Monday October 06, 2008 at 04:00 PM
The Art of Mixing (Part 4 of 4) by Josh
This week, I’m going to finish up the series with the last element of a good mix: interest (making the mix special). Check out Part 3 for pan, dimension, and dynamics, Part 2 for balance and frequency range, and Part 1 for a general primer on mixing.
Interest is often the most elusive element of a good mix. Oftentimes, mixers fully address the previous five elements (balance, frequency range, pan, dimension, and dynamics), but the final mix still sounds flat, lifeless, and uninteresting.
A mix needs to have something special that captures the interest of the listener. How many times have you heard a song with relatively simple, straightforward parts, but somehow felt an emotional pull or connection that made you want to listen over and over again? Especially in popular music, the mixer plays a crucial role in creating the emotion and excitement that makes the song a hit.
Creating excitement and interest in a mix is often a tough feat to accomplish. One tip is to view the song as a movie or story. There needs to be moments of tension, a build up that results in an emotional climax, and resolution. Use compressors, volume change, pan, and other tools to create subtle or dramatic rise and falls throughout the song.
Here are several tips that Bobby Owsinski offers in his great book on mixing and the main source for this blog series, The Mixing Engineer’s Handbook:
- Find the Direction of the Song: This is the first thing that a mixer needs to decide. If the song is going in a folk direction the mixing decisions will be different than a rock, pop, hip-hop, or country song.
- Develop the groove: Every piece of music has some sort of groove or pulse. Find what instruments define the groove (Hint: It’s not always the drums) and develop the song around that element.
- Emphasize the Most Important Element: In many songs, there is one element that is just as, if not more, important than the groove. For instance, in a Mariah Carey song, the vocals are clearly the focal point and need to be treated as such. The vocals are not always the most important element, however. An example would be a guitar-driven garage band that puts more emphasis on loud electric guitars than the vocals.
I hope this information on mixing helps you guys with your mixes here on Indaba. Like any art, the most important skill is to find your inspiration and trust your instincts. If something makes you excited, don’t worry about whether or not it is the “technically correct” way – your ears and instincts are much more valuable than meters and rules. Happy mixing!
Monday September 22, 2008 at 05:00 PM
The Art of Mixing (Part 3 of 4) by Josh
Pan – Mixers use pan to position musical elements across the stereo field, from left to right, in a two-dimensional plane. This is achieved by placing more or less of a sound signal in either the left or right channel of a two-channel mix. Pan is often overlooked as a major element in mixing, but good panning techniques are crucial for a clear, balanced, and interesting mix.
As with volume balance, the first step when working with pan is to identify the main musical elements of a mix. The common practice nowadays is to place the lead vocals of a track near the center of the spectrum, and to build backing tracks around the lead vocals on the sides. Mixers usually place bass-heavy sounds like the kick drum and bass guitar near the center too, as these sounds anchor the mix and will not interfere frequency-wise with the vocals or lead instrument. Some instruments that are recorded in stereo, such as piano and percussion, sound better to me when they exist across a broad spectrum of the stereo field, but if it is a complicated mix I usually try to confine them a bit more.
A common mistake among mixers is to pan a lot to the hard left or hard right (all the way to one side or another). This creates a build-up of sound on each channel and results in a muddy, unclear mix. For this reason, I like to place each element in a unique position across the stereo field, from just left of center to almost hard left or right. I like to save the far ends of the spectrum for effects such as reverbs and delays. This will create a clearer mix with more depth and interest. Pan can be used to prevent conflict between two instruments of similar frequencies. For example, panning an electric guitar towards the left and a synthesizer towards the right will eliminate a lot of the frequency clash between the two instruments.
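The channel-gain idea behind panning can be sketched in code. This is a hypothetical illustration using the common equal-power pan law (the post doesn’t name a specific law, and the function name is mine):

```python
import math

def equal_power_pan(pos):
    """Equal-power pan law: pos ranges from -1.0 (hard left) to +1.0 (hard right).

    Returns (left_gain, right_gain). Perceived power stays roughly constant
    as a source moves across the field, so the center doesn't sound quieter
    than the sides.
    """
    angle = (pos + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

# Dead center: both channels at ~0.707 (-3 dB each), not 0.5.
center_l, center_r = equal_power_pan(0.0)

# "Just left of center," as suggested above for individual elements:
elem_l, elem_r = equal_power_pan(-0.3)
```

Stepping `pos` through unique values for each instrument is one way to give every element its own spot without resorting to hard left/right.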
Dimension – Dimension refers to the ambience or effects of a mix. This is done through the use of effects processors such as reverbs and delay.
Reverbs and delays are used to create a perception of three-dimensional space in a mix. Delays repeat the original sound at a specified time interval and volume level to create an “echo” effect. Reverbs do basically the same thing, but the repetitions are much shorter and more complex. Applying reverb and delays generally makes sounds appear further away, so they are a great way to create a sense of three-dimensional space in a mix. Here are some tips:
- Time your delays to the tempo of the song. Most software delays will do this for you, but here’s a quick formula for those working without an advanced DAW: ¼ note delay in milliseconds = 60,000 / Song Tempo (BPM). Use this number to find other delay lengths by dividing or multiplying the quarter-note value.
- When layering reverbs and delays on top of each other, start with shorter length reverbs and delays, placing longer ones on top. This will create a better sense of space.
- One cool trick is to use several timed, short delays to create space rather than a big reverb – this will help maintain the clarity of the mix.
- EQ the reverb and delays to get the precise sound you want.
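The tempo formula in the first tip above is easy to script. A minimal sketch (the function names are mine, not from any particular DAW):

```python
def quarter_note_ms(bpm):
    """Quarter-note delay in milliseconds: 60,000 ms per minute / beats per minute."""
    return 60000.0 / bpm

def delay_times(bpm):
    """Common subdivisions, derived by multiplying or dividing the quarter-note value."""
    q = quarter_note_ms(bpm)
    return {
        "1/2": q * 2.0,
        "1/4": q,
        "1/8": q / 2.0,
        "1/16": q / 4.0,
        "1/8 dotted": q * 0.75,  # dotted eighth = 3/4 of a quarter note
    }

# At 120 BPM a quarter-note delay is 500 ms, an eighth note 250 ms.
```

Dialing one of these values into a delay plugin keeps the echoes locked to the groove instead of smearing against it.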
Dynamics – The volume change/envelope of a track
The manipulation of dynamics is done through the use of compressors, limiters, and gates. Pretty much universally applied in all major genres, with the exception of some classical and jazz music, compression is key to adding energy and life to a song.
Usually placed right before or after the EQ process of a track, compression controls the dynamics of a sound. Compression is commonly used to make an instrument or voice stand out and appear closer to the listener. It is also often used as a last step to make a mix sound louder and more exciting. Compression is a complicated topic; check out this Indablog post by PJ for more in-depth conversation on it.
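As a rough illustration of what “controlling the dynamics” means, here is a sketch of a basic downward compressor’s static gain curve – a textbook formula, not anything specific from this post or PJ’s article, and the names are mine:

```python
def compressor_gain_db(input_db, threshold_db, ratio):
    """Gain change (in dB, always <= 0) applied by a hard-knee downward compressor.

    Below the threshold the signal passes untouched; above it, every `ratio` dB
    of input produces only 1 dB of output above the threshold.
    """
    if input_db <= threshold_db:
        return 0.0
    over = input_db - threshold_db
    return -(over - over / ratio)

# Example: a -6 dB peak through a 4:1 compressor with an -18 dB threshold is
# 12 dB over; the output comes out only 3 dB over, i.e. 9 dB of gain reduction.
```

Squeezing the loud peaks down this way is what lets you raise the overall level afterwards, which is where the “louder and more exciting” effect comes from.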
Let me know your specific tricks for pan and dimension. Next week we’ll finish up with interest.
Monday September 15, 2008 at 04:00 PM
The Art of Mixing (Part 2 of 4) by Josh
This week, I’m going to go into more depth on two of the six elements of a mix that I talked about last week in Part 1.
Hopefully this information will make your collaborations here on Indaba sound even more professional!
To recap, balance refers to the volume relationship between different musical elements in a mix. Balance is probably the most clearly evident part of mixing – when you see an engineer adjusting the faders on a mixer, they are working on achieving a volume balance. You want to make certain musical instruments, such as lead vocals and a guitar solo, stand out over the backing elements of the song.
The key to achieving good balance starts with identifying what level of importance each musical element should have in the mix, in accordance to the song that you are mixing. For instance, in a Mariah Carey song, you should recognize that her renowned voice is going to be the key element, and adjust the levels/faders accordingly. However, if you are working on a techno song, often times the drum and bass elements are right up there with the vocals, if not at a higher volume level.
Don’t adjust balance purely on the dB readings or the light meters on your mixer, which can be deceiving. Due to the complexity of sound wave frequencies, sometimes a sound with a lower dB meter reading can appear to sound louder than one with a higher reading. Use the meters as a guide, especially to prevent ‘clipping,’ but rely on your ears as the final judge.
A quick primer: frequency refers to the “pitch” of a sound, from deep sub-sonic bass to shrill highs. Every instrument or voice is made up of a bunch of different frequencies, but each one has a prominent frequency centered somewhere along the spectrum. For instance, a bass guitar is heavy in the lower frequencies. The tool for adjusting frequencies is EQ, or equalization, which lets you raise and lower the volume (dBs) of each frequency.
The real complexity with frequency balance comes in when you combine several different instruments together. When you layer sounds that are heavy in the same frequencies, they tend to clash and create an unbalanced mix. For instance, a piano, bass guitar, kick drum, toms, and guitar often have a lot of overlapping frequencies in the low-mid range. When placed in the same mix, you’ll probably get a muddy sound. The trick to frequency balance is to make sure that all frequencies are properly and evenly represented. This can be done in several ways:
- Carve out a frequency spectrum for each individual offending instrument using EQ – Ex. lower the high-mids on a piano, raise the high-mids on a guitar
- Change the volume level of an offending instrument – Ex. lowering a bass drum signal will help fix the conflict with the bass guitar
- Remove offending instruments that are not crucial to the mix – Ex. take out an extra layered kick drum sample
I hope these tips help you out with your mixes. The most important tool is your ears, so if something doesn’t sound right to you, experiment with volume, pan, and EQ.
Next week I’ll continue with pan, dimension, and dynamics.
Let me know your thoughts!
Monday September 08, 2008 at 06:00 PM
The Art of Mixing (Part 1 of 4)
Part 1: The Basics – by Josh
With three contests going on at once here at Indaba – Minitek, Passenger, and of course Mariah Carey – we’ve had an opportunity to listen to thousands of mixes. I’ve heard everything across the gamut – from quick, simple volume-and-pan mixes to sliced, chopped-up, effects-loaded “re-imaginations.” In this four-part mixing tutorial on the weekly Indaba Production Tips blog, I’m going to describe the elements of a good mix, and let everyone in on a few simple techniques that pro mixing engineers use in the studio.
Mixing is defined as the act of taking multiple tracks or elements of recorded sound and combining them together in the stereo field. While this sounds simple, mixing can get quite complicated when you think about all the different ways that you can alter and shape the sound of each track. My favorite book on mixing – the legendary The Mixing Engineer’s Handbook by Bobby Owsinski – demystifies the art by describing the six elements of a mix:
- Balance – Volume or loudness comparison of each track or musical element
- Frequency Range – Making sure that bass, mid-range, and treble frequencies are well balanced – usually through judicious use of EQ
- Pan – The placement of the various instruments across the stereo sound field
- Dimension – Adding effects and ambience to recorded tracks
- Dynamics – Adjusting the changes in loudness of each track or element
- Interest – Making your overall mix have that special quality
Professional mixers take these elements into account every time they make a mix, and you should too. There are countless methods and techniques for achieving good balance in each of the six elements; every mixer has their own way of doing things. Some aspects of mixing, such as volume and pan, are pretty straightforward. Others, like dimension and interest, are less clearly defined and require experimentation to create a “feel.” Generally, mixers start with balance, frequency range, and pan before moving on to effects and dynamics. Effects such as reverb, delays, and compression are usually applied in these later stages, and this is where a mixing engineer really has the power to shape the sound. Next week I’ll talk specifically about the techniques used to create a mix, but in the meantime here are a couple tips to get started:
- Listen critically to well-mixed songs: Pick out several songs that you think sound especially good, and try to identify the six elements above during the course of the song. Listen to the overall balance between the vocal and various instrument elements and how that balance was created. The more you do this, the better your ear for mixing and music in general will become. If you need a place to start, some standards for high mix quality are later Beatles albums (especially Abbey Road and Sgt. Pepper’s) and anything by Steely Dan. Does the song have that “special” sound that makes it unique? Can you put your finger on what creates that “interest?”
- Start simple: Identify the main musical elements of your song – i.e., lead vocals, electric guitar, drums, etc. – and think about which ones you want to bring out in the mix. Place the main element near the center of the stereo field using panning, and supporting elements more towards the sides. Make sure that the perceived loudness of the main elements that you want the listener to focus on is higher than that of the other elements.
Mixing is one of the most challenging and enigmatic aspects of music production, but it is also rewarding, creative, and fun. The more you think about the balance and clarity of your mix in terms of the overall sound and feel you are trying to create, the better it will sound. Stay tuned for Part Two on specific techniques and methods next week!
Thursday January 31, 2008 at 07:00 PM
From JoLynn Seaman’s Blog
I am in such a FUN mood, tonight! I finally had some time to "play" with my friends here at Indaba. I’m feeling so good I thought I’d post a little something, but first an update:
Thanks to Indaba’s recent contest, I’m enjoying my generous prize at Berklee. I’m taking "Advanced Mixing & Mastering with ProTools" (what the heck?!)… So far, so good. ;-) Anyway, with engineering fresh on my mind, I recalled an article in EQ Magazine a while ago that just made me giggle my socks off! I’m betting some of you can relate to this poor guy:
Spence Peppard (no relation to George), engineer at Encore Studios in Nacogdoches (of Willie Nelson fame), mostly records country and hard rock. He’s had his share of fits and was kind enough to compile a little "tip" list for us. So here is Spence’s five cents worth from "a hell of an angry guy":
1. One of us is not talented, and it’s not me.
2. Pro Tools will allow me to build a decent-sounding guitar solo for you, even though you can only play three notes of it at a time. Your "Frankenstein" solo will sound good on the CD, but you will still suck live. One of us is supposed to have lightning-fast fingers, and it’s not me.
3. If you want to use the studio as a $125-an-hour rehearsal space, fine. Bands used to come to the studio ready to record, but whatever. I can fix all sorts of mistakes with Pro Tools. One of us is totally unprepared, and it’s not me.
4. You planned to do the entire rhythm section of your recording with loops. Now you don’t like that it sounds canned? One of us is an idiot. That would be the part of US that doesn’t include me.
5. A little compression makes music sound better. A lot of compression can make a CD sound better when played in a loud environment. I would like to produce just one recording where the client doesn’t insist on every meter being a solid bar, where there are some dynamics, and where everything isn’t compressed into a big, fat crap-cake. One of us will do that every time they’re given the chance. Can you guess which one of us that would be?
….Whew! I just made my own day! :) [insert hysterical laughter link here] I hope it made yours too. -Stay beautiful! :)
**Kudos to my girl, Lauren, at Music Player Network for publishing clearance. :)
Friday December 07, 2007 at 11:00 AM
This epic article comes to us from Ashley Witt
Here is an article I did years ago about getting your samples to sound real.
"It won’t sound like that when the orchestra plays it." It’s very important to understand what happens when your music is playing in front of a producer and you say this – and, of course, start to explain why it won’t sound like your MIDI and sample rendition. Think of the teacher’s voice on Charlie Brown: that is what the producer hears from you while you’re trying to explain "what it will sound like."
It could also be said that the most important thing is the quality of the samples. The considerations for "quality" of samples are not just about the sound of the samples – sound can actually be the last consideration when determining quality! Samples have to be both playable and recordable, which may sound strange, but there are samples that sound great but cannot be used! A good example of this is harp glisses; I could only dream of never seeing another library with those. Then there are samples that sound amazing but are not recordable; for instance, French horn samples are most difficult to bring out in a mix if the samples have too much reverb on them.
Part of the secret to successful emulations is "knowing" the samples. Becoming intimate with your samples is very difficult because it requires a lot of time spent auditioning samples, not just for playback sound quality but also for simple playability in a variety of contexts. Ideally every composer would create their own library from scratch; this creates an intimacy of knowledge of the sounds that is difficult to rival. Most people don’t have this luxury, so another way to become intimate is by creating your own patches from scratch with the samples you already have. This way you understand the limit of each sample as you play them, set volumes, ranges, envelopes, etc. You may never use the patches, but it is a great way to get to know them. This does, however, take time, and even though this procedure can be highly recommended, for most it is impractical. Another way to quickly get to know some new samples is by remaking a song you already know. It’s best to do a song in which you know every instrument by heart, so there is nothing to think about except whatever is new to the equation – in this case, the samples and how to perform with them and have them sound as realistic as possible. You will be surprised how well this works.
Sample crossfading can be the most noticeable change in a composition that uses samples extensively. Using a crossfading patch with mediocre samples can sound much better than a non-crossfading excellent sample. The best example of this is the brass instruments, since changing from pp to ff for brass is so drastic (of course, some percussion instruments exhibit drastic changes as well). When you have good samples to crossfade with, what controller do you use to crossfade? Well, in orchestral work we usually use the mod wheel, because on most controllers the mod wheel is very loose and it provides an easy and smooth way to crossfade. On some controllers the mod control is part of the pitch joystick or pitch wheel, so crossfading with mod isn’t practical. Also, with this type of controller the pitch control springs back to zero, making it impossible to leave the control set at a specific point in the crossfade. I believe it is worth it to have a mod wheel on a controller that is separate from the pitch control; however, in the case of a combined mod/pitch wheel controller, a MIDI slider becomes the obvious and most practical choice for crossfading. Overall, the only thing that matters when you are performing the sample is getting used to the crossfade points and the feel of the controller. The sampler actually plays the biggest role in the crossfading. This is somewhat difficult to explain, since each sampler reacts differently to crossfading and controllers. As comfortable as you might be with your sampler and the way it crossfades, there might be some samplers that do it better than others. I could start a riot by elaborating on this, so I’ll just leave it at that.
In most cases, as much as I would like to put two or three crossfades on a patch, it is not advisable. One crossfade from pp or p to f or ff is really the only practical way to go, because of MIDI resolution and RAM size.
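The mod-wheel crossfade described above boils down to mapping CC1 (0–127) onto the gains of two dynamic layers. A minimal sketch, assuming a simple linear fade between a pp and an ff layer (the function name is mine, and real samplers often use equal-power curves instead):

```python
def crossfade_gains(mod_value):
    """Map the mod wheel (MIDI CC1, 0-127) to (pp_layer_gain, ff_layer_gain).

    At 0 you hear only the soft layer, at 127 only the loud layer,
    with a linear blend anywhere in between.
    """
    x = mod_value / 127.0
    return 1.0 - x, x

# Wheel at rest: all pp. Wheel fully up: all ff. Halfway: roughly equal blend.
pp_gain, ff_gain = crossfade_gains(64)
```

Because the sum of the two gains is constant, pushing the wheel changes timbre (the ff layer's brighter attack) rather than just volume, which is exactly why crossfading beats plain velocity switching on brass.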
Most of us that play orchestral samples from a keyboard have some kind of synth background. Because of this, we will typically pull the pitch wheel down, hit a note, then move the pitch wheel up for a portamento effect. For string instruments, however, it is frequently done in another manner: a note is struck and the portamento is carried into the next note. Frequently there is also a roll of the finger onto the second note. So how do we accomplish this? The following MP3s and pictures illustrate one way of achieving this effect. The first MP3 is a solo cello played on the keyboard using a volume slider for expression. There is also a violin that comes in near the end of the cello line, but there is no corresponding picture. The second MP3 is a solo violin played with a wind controller for expression. The wind controller also works better than a keyboard in the sense that it can only play one note at a time. This is helpful because, to do the illustrated effect, the notes you are working with need to be butted up against each other. When playing a polyphonic keyboard it is somewhat difficult to get the same outcome. It may be helpful to set your keyboard to monophonic note control if it has the ability; otherwise you can edit the notes so that they are butted together. The following pictures are from the Matrix Editor in Logic Audio.
The pitch bend is drawn in near the end of the note. With the pitch bend setting on my controller set at +6 (one whole step), I have topped out at about +/-48 before it started to sound funny. Obviously, the smaller the leap from one note to the next, the smaller the amount of pitch bend, but I also haven’t gone much below +/-30 because it becomes unnoticeable. The peak of the pitch bend should be on the last tick of the first note, then the pitch should be snapped back to 0 on the first tick of the second note.
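That ramp-then-snap shape can also be generated as a list of (tick, bend) events rather than drawn by hand. A hypothetical sketch – the function name, tick resolution, and bend units are mine, not Logic’s:

```python
def portamento_bend(note_end_tick, ramp_ticks, peak):
    """Build (tick, bend_value) events for the slide described above.

    The bend ramps from 0 up to `peak`, landing exactly on the last tick of
    the first note, then snaps back to 0 on the first tick of the second
    note so the new note starts at its true pitch.
    """
    events = []
    start = note_end_tick - ramp_ticks
    for i in range(ramp_ticks + 1):
        events.append((start + i, round(peak * i / ramp_ticks)))
    events.append((note_end_tick + 1, 0))  # snap back to center
    return events

# A +48 bend ramped over the last 4 ticks of a note ending at tick 480
# peaks at (480, 48) and snaps to (481, 0).
```

A negative `peak` gives the downward slide; scaling `peak` to the interval size mirrors the +/-30 to +/-48 range mentioned above.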
I could go on and on about weighted action keyboard controllers but I will try to minimize. First of all and most obvious is the reason for weighted action. For most piano players this is what they are used to and feel that it is the only thing they can compose on. As most of these people would be very argumentative on anything that follows that statement, I must press on. As much as I love the abilities of some of the weighted action controllers, the main reason I would have one is to practice on and to play piano parts into compositions. There are problems with this though, and the most important problem being that I have yet to play or hear any piano samples that I could actually recommend. Every time I have been consulted on piano samples I recommend that a composer use any samples they are comfortable with but ALWAYS use the real thing for the final recording. Most of these people are good piano players and will cover the piano part in one or two passes in the studio, which will obviously not add a significant amount to studio cost. Please don’t send me email about great piano samples, I’ve most likely heard them and I have so many reasons for my impartiality at this point after hearing so many that I may have become immune to piano samples.
Back to weighted action. I have been at studios and watched composers play various orchestral instruments with weighted-action controllers. The best way I can explain what I see and hear – besides a piano player playing an instrument he or she has obviously never touched – is that, with my vivid imagination, I begin to visualize someone ripping the key right out of the controller and hitting it upside the head of a musician innocently blowing into their horn in an orchestra. In some cases you might as well just pick up the controller (with a little help, of course) and chuck it over the string players; that is how brutally flat and fake the performance sounds. This is mostly with the wind instruments; I don’t really see this problem a lot with string ensembles. I am not saying that playing a wind instrument with a weighted-action controller is impossible. All I’m saying is that it is difficult compared to synth action, wind control, or even a string controller, no matter how good you are as a piano player.
If you know your sample library intimately, if your controller is worn in, if you know the intricacies of the instrument you are trying to emulate with your samples (hopefully you have played the instrument before), and if you have an open mind and don’t think that just because you are a virtuoso piano player you can play every other instrument just by using your piano abilities, then realistic-sounding sampled emulations of orchestral instruments can be realized. Some of you may think I’m over-reacting, but most likely, if any of this bothers you, then you are one of the people beating the piccolo player upside the head with the D2 key.
What, the piccolo doesn’t play a D2? I’ll make sure it’s on the test at the end. I really don’t like recommending a synth-action keyboard to piano players because I know how difficult it can be to get accustomed to this feel, but I also recommend a wind controller to people that have never played a wind instrument, or a percussive controller to a composer that has never played percussion. Whatever makes the composition sound more real is the whole point of this article, of course. Don’t forget that the ability of a controller to control all aspects of MIDI is not all that important when it comes to orchestral music, for you don’t use most MIDI functions. It is most important that the controller can reveal the best quality of the sample. Of course, if you buy a controller to increase your ability to make more realistic-sounding compositions, I must reiterate: re-create a song that you know totally well, so that you are only concentrating on how to use your new equipment rather than creating something new. You don’t want to have to worry about getting used to new equipment or new samples every time you sit down to write new music, as your focus would be taken away from the composition process, dictated by what you can and cannot do with your samples.
I believe that a wind controller will make a bigger difference in the production quality of a composition than any other tool. When I say wind controller, I don’t mean a breath controller – something you blow into while you play the keys. You might as well have a VC pedal to control volume if you’re using a standard breath controller. There is an advantage to the breath controller, but a wind controller will give you complete control over breath, fingering, even bite and pitch. There are intricacies of wind instruments that can be mimicked pretty well on a keyboard controller, but they just can’t be done to their fullest. They can be programmed, but this would take 50x the time that playing a wind controller would take, even if you have never played one. If you think about it, you will remember that "transpose" parameter on every track of every sequencer – even Performer has it now. On a saxophone-fingering-style wind controller, the C scale is almost as simple as having your eight fingers down and letting one go at a time, starting with your left pinky. So, you’re playing in the key of Eb? Just transpose the track, play the part in C, and when you get it right, transpose the object back to Eb for the score. It would be wise to remember to transpose it to the correct key for printing; forgetting could be embarrassing, provided you have to hand the printed score to a real player afterwards. There are also trumpet- and violin-style controllers; however, the trumpet style isn’t as simple to operate, and the violin style is even less simple and also has very limited control at this time. In my opinion, and as a violin player, these controllers are very disappointing. I will go out on a limb and make a recommendation for wind control: the Yamaha WX11 is the easiest, but if you want an excellent controller with a little more ability, the Yamaha WX5 is the way to go.
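The play-in-C-then-transpose trick above is, in MIDI terms, just a note-number shift. A trivial sketch (a hypothetical helper, not a Performer or Logic function):

```python
def transpose(notes, semitones):
    """Shift a list of MIDI note numbers; e.g. +3 makes a part played in C sound in Eb."""
    return [n + semitones for n in notes]

# Played in C on the wind controller: C4, E4, G4 -> sounding in Eb major territory.
eb_part = transpose([60, 64, 67], 3)
# Remember to transpose back to the written key before printing the score.
```

Sequencers apply this non-destructively per track, which is what makes the "finger it in C, hear it in Eb" workflow painless.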
With a wind controller, some might say that physically modeled sounds like those on the Yamaha VL series are the best way to go, but what if you don’t like the sound of the pp oboe and want to change it? With a sampler you can obviously swap that sound rather easily. Always remember that the way a sampler reacts to control is important, especially wind control.
So playing a C1 on the bass clarinet is out of the instrument’s range? It’s great if you know the exact range of every instrument, but don’t forget that a virtuoso on any wind instrument can have a wider range than most other players. Playing out of range can be considered in bad taste, and an arranger or transcriber may criticize you for it, but it can also add a new and interesting element to a composition. Just be ready for the arranger, orchestrator, or transcriber to point out your mistake. Now, when I say "interesting," I mean it as a good word here, as opposed to:
Composer: "How do you like my new song?"
Composer’s friend: "Oh… it’s… ummm… interesting."
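If you do want a sanity check before the transcriber catches you, a range table is easy to sketch. The numbers below are my rough approximations of conventional sounding ranges in MIDI note numbers, not authoritative figures; a virtuoso can exceed them, as noted above.

```python
# Rough sounding ranges for a few instruments, as MIDI note numbers
# (middle C = 60). These are approximate conventional ranges only;
# real players, especially virtuosos, can go beyond them.
RANGES = {
    "bass clarinet": (34, 82),   # approx. Bb1 up to about Bb5
    "oboe":          (58, 91),   # approx. Bb3 up to about G6
    "piccolo":       (74, 102),  # approx. D5 up to about F#7
}

def in_range(instrument, note):
    low, high = RANGES[instrument]
    return low <= note <= high

print(in_range("bass clarinet", 24))  # C1 -> False, out of range
print(in_range("piccolo", 38))        # D2 -> False, hence the joke earlier
```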
You will be surprised what kinds of melodies you can come up with on a wind controller, and you’ll commonly find yourself creating the first line of a piece this way. It’s best to go in knowing that you’ll re-record the wind-controlled part once the other instruments are recorded. Remember that a wind controller adds many new parameters to your sound: your diaphragm, lungs, lips, arms, and fingers all affect it now. The part you played before any other instruments existed will almost always sound better re-recorded after the others are added, because now you’re playing against the ambience of the other elements.
The most important paragraph
So you know the range of every instrument, you know what rosin is, you know that an oboe is a double-reed instrument, that a trombone can have valves, that there is such a thing as a wooden cornet, and that "Fagott" was a common name for the bassoon before our century. Even if you know all of this, I cannot stress enough that it is only a small part of the equation. You can be the most incredible piano player in the world, you can have a degree in music, but if you’ve never picked up an oboe or French horn and exploded some blood vessels in your head, then you have only a minor understanding of the instruments.
So what if you don’t own any of these instruments or have easy access to them? Fortunately, most music stores rent just about every orchestral instrument for very reasonable prices – you’ll be surprised at how cheap rental rates can be, considering how much you can learn by simply renting an instrument for a couple of weeks.
Think of it like this: imagine a bassoon player who has never touched a piano writing a polyphonic piece for piano in one pass. He may know how the piano works and the theory surrounding it, but none of that matters, because he’s never played it; the bassoon can hardly cover the intricacies of the piano, especially its polyphony. What about the great composers and conductors who do their composing on paper? You’ll find that most of these people have at least picked up a good percentage of the instruments they compose for and have a physical understanding of every one of them. Please don’t take me to be saying that you HAVE to be able to play the instruments you compose for. I’m saying that a physical understanding of a good percentage of them will most likely improve your production and the resulting quality of your composition. Even just picking up a trumpet mouthpiece and getting a decent buzz out of it can teach you a lot: how long it takes from the moment you start blowing until the sound comes out, how long a player can hold a note, how quietly the trumpet can be played before the lips stop vibrating, and, if you blow through it long enough, how the sound changes as the lips warm up. All of this and more can be learned from such a simple, quick blow into a mouthpiece. How could this be bad advice, unless you’re in a car, someone slams on the brakes, and you swallow the mouthpiece? Even then I could probably come up with something to be learned from swallowing a trumpet mouthpiece, assuming one lives through the experience.
Wednesday December 05, 2007 at 11:00 AM
From: PJ (http://www.indabamusic.com/people/pjb)
Compression is commonly used to lessen the difference between loud and quiet parts of a recording and is often meant to be something that you don’t hear as an effect, but that helps to even out the sound and avoid drastic volume changes. With some creativity, however, it can also be a great effect that is meant to be heard. This tip will explain how to add a subtle "glow" to your recordings using compression. Check out the audio and visual "before and after" below.
This is a particularly good trick for things like acoustic and electric guitar as well as piano – especially when recorded from afar. The specific levels will obviously differ depending on the instrument, playing style, and recording but the general principles will apply to any piece of audio. After recording a clean track add a compression plug-in and do the following to achieve that subtle but special glow:
1) Raise the Ratio (you can even do it more than in the example below)
2) Crank up the Knee
3) Lower the Threshold
4) Raise the Gain
Essentially, by raising the ratio you are telling the compressor to squash higher-volume sounds into the same dynamic range as lower-volume sounds. By lowering the threshold you are creating more room for really quiet sounds to come through, and by increasing the gain you make those low-level sounds output louder. The result is that sounds recorded from far away, or subtle details like guitar picking or breathing, pop out and sound much closer, which creates a general glow around the recording.
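The four moves above all act on one static gain curve. Here is a sketch of that curve using the standard soft-knee compressor formula (every plug-in names and scales these knobs a bit differently; the threshold, ratio, knee, and make-up values below are just example settings, not ones from the post):

```python
# Static gain curve of a soft-knee compressor: ratio, knee, threshold,
# and make-up gain, all working in dB.

def compress_db(level, threshold=-30.0, ratio=4.0, knee=10.0, makeup=12.0):
    """Return the output level in dB for a given input level in dB."""
    over = level - threshold
    if over <= -knee / 2:            # well below threshold: untouched
        out = level
    elif over < knee / 2:            # inside the knee: gradual onset
        out = level + (1 / ratio - 1) * (over + knee / 2) ** 2 / (2 * knee)
    else:                            # above the knee: full ratio applies
        out = threshold + over / ratio
    return out + makeup              # raise the gain to bring details up

# A quiet detail comes up by the full make-up gain...
print(compress_db(-50.0))   # -38.0
# ...while a loud peak is squashed down toward it:
print(compress_db(-6.0))    # -12.0
```

A 44 dB input spread (−50 to −6) comes out as a 26 dB spread: exactly the "quiet stuff closer, loud stuff tamed" glow described above.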
A downside of this effect is that quiet noises in the negative space (like amp buzz between guitar notes) will be greatly amplified, so it sometimes helps to gate the track before compression to remove sounds below a certain volume.
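That gate step can be sketched like so, assuming samples normalized to the range −1.0 to 1.0 (a real gate adds attack/release smoothing so notes don’t click on and off; this only shows the core idea):

```python
# A bare-bones noise gate applied before compression, as suggested above.
# Anything below the (linear) threshold is muted so the compressor
# can't amplify it later. Real gates smooth the transitions.

def gate(samples, threshold=0.02):
    return [s if abs(s) >= threshold else 0.0 for s in samples]

guitar = [0.5, 0.3, 0.01, -0.005, 0.4]   # notes with faint amp buzz between
print(gate(guitar))  # [0.5, 0.3, 0.0, 0.0, 0.4]
```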
Remember, guys: if you have a recording tip, send it to me at Streeter@Indabamusic.com
Friday November 16, 2007 at 01:00 PM
From: Dan
Here is a mixing tip that will really get some punch into your snare, kick drum, and bass tracks. I first learned it about a year ago in a small mixing seminar with the great mixing engineer Paul Special (Billy Joel, Aerosmith, Paul Simon, STP, Nike, VW, and so on). Essentially, you are duplicating a track, processing each copy differently by applying different effects and levels, and grouping them so they sound like one track. It works basically the same way for any of these three instruments; adjust the ingredients to taste. Let’s take a kick drum track as an example.
First, duplicate the track so you have two identical versions of the kick. Next, apply a very liberal amount of compression to one of them, getting it to an unnaturally tight, almost crunchy level. Once you have compressed the track, add some EQ to put the punch in the frequency range of your choice.
Now, if you play these two tracks at the same time (the unprocessed one and the new, processed one), you might hear that they are slightly out of time. Even if you don’t, I promise they are: adding a real-time effect plug-in to one of two identical tracks will necessarily add some delay (probably about 5 ms). This is easily fixed by adding a "ghost" plug-in to the other, unprocessed track, bringing the two back into time together. Don’t worry about this putting them out of time with the other tracks; the shift is generally completely inaudible. A good ghost plug-in here is a compressor set to do essentially no compression. You are already compressing the processed track, so adding the same compressor at a minute level snaps things back into time, and if it adds any "color" to the otherwise unprocessed track, that color is consistent with the second track.
To get these tracks to actually sound good together, you have to adjust their relative volumes.
The original track is really what you want to hear; the second track is just there to add that punch, so its volume should be significantly lower. You don’t want to hear your kick drum peaking and crackling, you just want the subtle crispness of it underneath the original sound. Adjust these levels while listening to the rest of the mix. There is no reason to solo these tracks and chase a good sound in isolation; it doesn’t matter how it sounds on its own, it matters how it sounds in the mix. Once you have a good sound, group/lock the two tracks together so that, later on, you don’t accidentally move one and not the other. And that, my friends, is multing! Simple, but it makes a huge difference.
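The whole mult, sample by sample, amounts to: delay the dry copy to match the plug-in’s latency, then blend the crunchy copy in quietly underneath. A toy sketch (the one-sample latency and 10% wet level are made-up illustration values, not anything from the seminar):

```python
# Sketch of the multing idea: duplicate the track, process the copy,
# delay the dry copy by the plug-in's latency so the two line up, then
# blend the processed copy in quietly under the original.

def delay(samples, n):
    """Shift a track later by n samples (the 'ghost plug-in' fix)."""
    return [0.0] * n + samples[:len(samples) - n] if n else samples

def mix(dry, wet, wet_level=0.1):
    """Sum the original with the processed copy at a much lower level."""
    return [d + wet_level * w for d, w in zip(dry, wet)]

kick = [0.0, 0.9, 0.4, 0.1, 0.0, 0.0]
crunchy = [0.0, 0.0, 0.6, 0.5, 0.3, 0.1]   # processed copy, one sample late
aligned_dry = delay(kick, 1)                # ghost delay realigns the dry copy
print(mix(aligned_dry, crunchy))
```

With the copies aligned, the transients stack instead of smearing, which is the whole point of the ghost plug-in.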