I have been working with jazz pianist Steve Nixon (http://freejazzlessons.com) for a while now, and I recently got the opportunity to work with his blues piano mentor: the legendary Bruce Katz! This new piano lesson DVD called the “Breakthrough Blues Piano Method” features my detailed transcriptions of Bruce Katz’s techniques as well as examples of his own blues improvisations. So, if you’re interested in getting your blues playing to another level, go visit http://www.freejazzlessons.com/breakthroughblues/ to get yourself a copy of this course.
About four years ago, I was in a rather curious phase in my journey as a composer. I was involved in writing and producing music that had what’s called “brainwave entrainment”. The works that I wrote for such a purpose pretty much sounded like this:
Take note that it’s best to listen to this music with your eyes closed and with headphones on:
This particular piece that I entitled “Night Sky” was released under a record label that was called a7records and is now known as Roundwaves. Now, what the heck is the purpose of all this? Music written with brainwave entrainment techniques (a.k.a. binaural beats) is part of what we can call “functional” or “applied” music, i.e. music that is not solely written for simple listening pleasure or entertainment. Such music includes that used in film, video games, animation, etc. If music for movies enhances the viewing experience to a whole new level (try watching films without music, they suck!), brainwave entrainment music is designed to put you in a particular state of brain activity. Why? The theory is that setting your brain’s electrical activity into a particular phase will help facilitate various functions such as eliciting sleep, improving concentration, helping you relax, exciting you, etc. As is universally known, music is a very powerful agent for altering your state of mind. You feel pumped up when listening to speed metal as you cruise down the freeway. You feel very cheesy when you hear David Pack sing “You’re the Biggest Part of Me”. You want to bob your head up and down when you hear some kind of four-on-the-floor drum and bass hit. Music with brainwave entrainment built into it works in a similar way.
Now, the question is how do we actually go about writing music that is theorized to have the effect of relaxation, sleep, and other effects? Here goes:
- Know what kind of effect you want to elicit first before you go write your track. Do you want your listener to just relax and chill? You need your music to elicit an Alpha wave response. You want them to go to sleep? Go Delta wave. Go ahead and read up on what these brain waves are and what they’re associated with. Start by reading this Mental Health Daily piece.
- We need to generate the basic backing track, and that basic backing track is one that has a binaural beat equivalent to the brain wave activity you are trying to produce. To do this, you need two sine waves, tuned to a barely audible bass or contrabass frequency, one panned hard left and the other panned hard right. Now, it is VERY IMPORTANT that the two sine waves are tuned in such a manner that the difference between them creates an oscillating beat equal to the frequency of the brainwave you’re trying to elicit. For example, if the sine wave on the left is tuned to 50 Hz and the sine wave on the right is tuned to 38 Hz, the difference between the two is 12 Hz, the upper limit of Alpha waves. The easiest way to do this is to use Audacity to generate sine waves tuned to the exact frequencies you need. The length of this binaural beat track (or tracks) depends on how long you want your music to be. Usually 8 to 10 minutes is enough.
- Make sure that the sine waves you use for your binaural beat are in key with the music you are going to write. This is plain musical common sense. Why? First off, you want the music to sound as pleasant as possible, so tune your sine waves to a root or a fifth. Second, anything atonal or dissonant will only irritate your listener. For instance, if my music is in the key of G and I want Alpha waves, my left sine wave is at 24.5 Hz (G0 if A4 is 440 Hz) and my right sine wave is at 36.5 Hz (about a microtone below D1 if A4 = 440 Hz). 36.5 minus 24.5 is 12, so I expect my binaural beat to match Alpha waves. In some instances, you may have to adjust the pitch of your sine waves accordingly if your music changes to distant keys. The point is that your sine waves (more or less) have to be in tune with the music.
- As for the amplitude of your binaural beats, keep it as low as possible. Bury it in the music; it should be more felt than heard. This is also why we usually tune our sine waves to bass frequencies.
- When your binaural beats are set, write your music over them. Whether you notate it first on paper (or in your scorewriter) or improvise over it doesn’t matter, as long as you end up with appropriate music over the beats.
- Make sure that the music is LONG. We are not writing a radio hit here folks! Not everybody can fall asleep, concentrate, relax in just under a minute or two.
- You can write in any genre as long as it is appropriate for the effect that you want. You surely won’t want screaming metal guitars on your sleep music, right? It’s just common sense.
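The backing-track step can be sketched in code as well. Below is a minimal illustration using only the Python standard library (the file name, frequencies, and amplitude are example values of my own, not a prescription; Audacity will do the same job without any programming):

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def write_binaural_bed(path, left_hz, right_hz, seconds, amplitude=0.1):
    """Render a stereo WAV: one sine panned hard left, one hard right.

    The difference between the two frequencies becomes the perceived
    binaural beat (e.g. 36.5 - 24.5 = 12 Hz, the upper edge of the
    Alpha range). Amplitude is kept low so the bed sits under the
    music, more felt than heard.
    """
    n = int(SAMPLE_RATE * seconds)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(2)        # stereo
        wav.setsampwidth(2)        # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        frames = bytearray()
        for i in range(n):
            t = i / SAMPLE_RATE
            left = amplitude * math.sin(2 * math.pi * left_hz * t)
            right = amplitude * math.sin(2 * math.pi * right_hz * t)
            frames += struct.pack("<hh", int(left * 32767), int(right * 32767))
        wav.writeframes(frames)

# G0 on the left, a 12 Hz beat above it on the right (Alpha range)
write_binaural_bed("alpha_bed.wav", 24.5, 36.5, seconds=10)
```

From here you can import the WAV into your DAW as the bed track and write the music on top of it.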
I suppose these steps should be enough to get you started in writing your first brainwave entrainment piece. If you all think I missed out on something, please leave your comments below.
It’s been so long since I last posted something here as I was very busy with graduate school activities and work as usual. I find it refreshing that I got some time now to write something. This past week, I completed a 10-minute piece which I submitted to my composition teacher, Dr. Kristina Benitez, and got some useful feedback from her. This coming trimester, my new task is to expand that piece into a multi-movement suite. Expanding it into a suite is very doable since that piece has a lot of ideas going on. The next question now is whether or not I can get it performed or at least be able to record a good mockup of it. Because of that, I started to explore the Sibelius 7 Sound Library.
These past few days, I was occupied with testing out the Sibelius 7 Sound Library on my MacBook Pro, and so I decided to dig up my musical history. I’m not very fond of listening to the old stuff I’ve written and recorded since it feels very much like reading your high school diary (the thought of which makes me cringe). However, in this case I wanted to hear what it would be like to try out my old compositions on a new sample library.
I use a number of sample libraries in my music production in various formats like NI Kontakt, Apple EXS24, etc., and I also used to have the old Sibelius 5 sound library. I was quite fond of it when it came out (even though it was far from perfect), and so it was very exciting for me to use this new library for the first time. Hearing my old, old works on new sounds gave them new life. They still sound far from perfect of course, but the Sibelius 7 Sounds are usable for creating orchestral mockups. The percussion and piano sounds are excellent, although the tremolo still has that slight machine-gun effect. The other samples like strings, woodwinds, and brass are okay. As with many sample libraries, I found the guitar samples to be less than satisfying. The classical guitar samples could have been good except that they can sound oddly squeaky because of the default fret noise setting. Imagine playing single scale notes with every note accompanied by a fret squeak; it sounds so unnatural. The solution is to dial down the fret noise knob. The steel string guitar sounds are okay. The distorted guitar sound is probably the most awful of the bunch. It’s a good thing that I’m a guitarist as well, so I wouldn’t need to use those guitar samples anyway.
I said a while ago that I’m not fond of listening to my old recordings but I did find something good about that little exercise. I was able to uncover musical ideas that I would call diamonds in the rough. Bits and pieces of melodic and rhythmic themes here and there would make good material to expand for a variety of compositions that I could craft in the near future. I just hope that I get the time and patience to further explore them.
So, going back to my graduate school composition work, I plan to expand that piece into a suite. I should probably start once the workweek begins, or maybe I should talk to Dr. Benitez first to plan it all out. After scoring it in Sibelius, I will tweak the hell out of the MIDI to make it somewhat realistic and then practice all guitar, iPad, and piano parts before recording. That should enable me to submit a good recording by the end of the term. Afterwards, it will be time to work on my graduate thesis. Seems like I should savor these light-load days, as I will be very busy in the next few months.
One of the things that make many musicians scratch their heads is the modes. Let’s face it: they are confusing, yet you need to learn and understand how to use them if you want to improve your musical skills and knowledge. We always hear about using the modes in everything from writing songs to soloing over a complex jazz piece. In this piece, I’m going to show a couple of ways to understand the modes.
Now, for us to understand this tutorial, we need to know what a major scale is and the names of the modes. Since we have seven notes in the major scale, we also get seven modes.
The Major Scale and its Relative Modes
We have this nice graphic above that shows our C major scale and its relative modes. We can easily play each of the major scale’s relative modes by starting the same major scale on a different note, and we name the mode according to that starting note. For example, if I want to play D Dorian, I just play the C major scale (a.k.a. the Ionian mode) starting on D as the root. Sound-wise, you will notice that by starting the same scale on a different note, you rearrange the order of intervals. Add to that the fact that you now consider the different note as the root note: you will tend to return to it every now and then, making you hear a scale that sounds very different from your original major scale.
And so, to figure out…
…the Ionian mode, we start our major scale at the 1st note (it’s just the same major scale, duh!) (I).
…the relative Dorian mode, we start our major scale at the 2nd note (ii).
…the relative Phrygian mode, we start our major scale at the 3rd note (iii).
…the relative Lydian mode, we start our major scale at the 4th note (IV).
…the relative Mixolydian mode, we start our major scale at the 5th note (V).
…the relative Aeolian mode, we start our major scale at the 6th note (this also happens to be our relative minor scale) (vi).
…the relative Locrian mode, we start our major scale at the 7th note (vii).
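The whole list above boils down to one operation: rotating the scale. Here is a tiny Python sketch (my own illustration, not part of any established method) that builds every relative mode of C major that way:

```python
MODES = ["Ionian", "Dorian", "Phrygian", "Lydian",
         "Mixolydian", "Aeolian", "Locrian"]

def relative_mode(major_scale, degree):
    """Start the same major scale on a different degree (1-7)."""
    i = degree - 1
    return major_scale[i:] + major_scale[:i]

c_major = ["C", "D", "E", "F", "G", "A", "B"]
for degree, name in enumerate(MODES, start=1):
    notes = relative_mode(c_major, degree)
    print(f"{notes[0]} {name}: {' '.join(notes)}")
```

Running it prints all seven relative modes, from C Ionian through B Locrian, each using exactly the same seven notes.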
Figuring out the Parallel Modes
Figuring out how to learn and play the parallel modes (e.g. C Major, C Locrian, C Phrygian, etc.) is a trickier thing. There are a number of ways to do it. The technical way is to analyze our relative modes, check the order of intervals, and then apply that order of intervals to a particular root note. For example, I know that the Ionian mode/major scale follows the whole step (W)-W-half step (H)-W-W-W-H pattern of intervals. By looking at, say, our E Phrygian in the relative modes section, we find out that the pattern is now H-W-W-W-H-W-W, giving a flat 2nd. So, let’s say I want to know what Bb Phrygian is: applying that pattern, I figure out that it is Bb-Cb-Db-Eb-F-Gb-Ab. I can do the same procedure for the other modes. Quite taxing, isn’t it?
What if we put it this way instead? We can categorize each mode as major (if it has a major 3rd) or minor (if it has a minor 3rd). By figuring out the formula for each mode, I now have this shortcut:
Major modes = Ionian, Lydian, Mixolydian
Minor modes = Dorian, Phrygian, Aeolian, Locrian
All I have to do next is figure out which intervals differ from those of our standard major and minor scales. Now, let’s assume that we already know that the major scale (Ionian) has a major 2nd, major 3rd, perfect 4th, perfect 5th, major 6th, and major 7th. Let’s also assume that we know our natural minor scale (Aeolian) as having a major 2nd, minor 3rd, perfect 4th, perfect 5th, minor 6th, and minor 7th. It’s time for us now to figure out how the other modes are built:
Dorian = Minor scale with major 6th instead of minor 6th
Phrygian = Minor scale with minor 2nd instead of major 2nd
Lydian = Major scale with augmented 4th instead of perfect 4th
Mixolydian = Major scale with minor 7th instead of major 7th
Locrian = Minor scale with minor 2nd and diminished 5th
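One way to double-check these formulas (a quick sketch of my own, not a standard tool) is to rotate the W-W-H-W-W-W-H step pattern in code and measure each mode's degrees in semitones. A 3rd of four semitones makes the mode major; three semitones makes it minor:

```python
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # W-W-H-W-W-W-H in semitones
MODES = ["Ionian", "Dorian", "Phrygian", "Lydian",
         "Mixolydian", "Aeolian", "Locrian"]

def mode_semitones(degree):
    """Cumulative semitone offsets of a mode's seven degrees from its root."""
    steps = MAJOR_STEPS[degree - 1:] + MAJOR_STEPS[:degree - 1]
    offsets = [0]
    for step in steps[:-1]:
        offsets.append(offsets[-1] + step)
    return offsets

for degree, name in enumerate(MODES, start=1):
    third = mode_semitones(degree)[2]
    quality = "major" if third == 4 else "minor"
    print(f"{name}: {quality} (3rd = {third} semitones)")
```

The same offsets also confirm the altered degrees: Lydian's 4th comes out as 6 semitones (augmented), Mixolydian's 7th as 10 (minor), Locrian's 5th as 6 (diminished), and so on.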
Still too difficult? Admittedly, this method does take some time to study. However, there are easier ways.
We can use the relative mode order to figure out how to play a mode correctly. All you have to do is know where a particular mode falls in the sequence; that position tells you which scale degree of a major scale serves as the mode’s root note. Confusing, right? Here’s a concrete example:
Let’s say that I want to play D Mixolydian. From the study of relative modes, I know that Mixolydian is the 5th mode, and so its root note is the 5th note of a particular major scale, which I find to be G in this case. And so, all I have to do to play D Mixolydian is play the G major scale starting on D.
Let’s also say that I want to play Ab Phrygian instead. Since Phrygian is the third mode, Ab is the third note of the Fb major scale. Now, you might say, “What the hell, Mark! There’s no such thing as Fb major.” Relax, I’ll explain it for you. From a strictly music theory standpoint, there is. But for the sake of practical use, it is just the E major scale, and so now we think of Ab as G# and then play the E major scale starting at G# to get ourselves the Ab Phrygian mode. I think that this is the simplest way of learning and playing the modes.
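Here is how that parent-scale trick might look in code. This sketch is my own illustration; it spells everything with sharps for simplicity, which is why Ab shows up as G# here, just as in the example above:

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole/half step pattern in semitones
MODES = {"Ionian": 1, "Dorian": 2, "Phrygian": 3, "Lydian": 4,
         "Mixolydian": 5, "Aeolian": 6, "Locrian": 7}

def parent_major(root, mode):
    """Return the major key whose scale contains `root` as the mode's degree."""
    degree = MODES[mode]
    offset = sum(MAJOR_STEPS[:degree - 1])  # semitones from key root to mode root
    return NOTES[(NOTES.index(root) - offset) % 12]

def mode_notes(root, mode):
    """Spell the mode by playing its parent major scale starting on `root`."""
    key = parent_major(root, mode)
    scale = [key]
    idx = NOTES.index(key)
    for step in MAJOR_STEPS[:-1]:
        idx = (idx + step) % 12
        scale.append(NOTES[idx])
    i = scale.index(root)
    return scale[i:] + scale[:i]

print(parent_major("D", "Mixolydian"))   # G, as in the example above
print(mode_notes("G#", "Phrygian"))      # the E major scale starting on G#
```

So the "find the parent major scale, then start on your root" idea really is just two small lookups.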
As for actual use in songwriting, composition, and soloing using modes, there are plenty of resources on the web for that. Anyway, you can always drop a line or two at the comments box if you have questions regarding modes and other stuff. Thanks.
One of the ways of understanding how many songs work in today’s contemporary context is knowing exactly what a 12-bar blues is. The latest project I was involved in is a GuitarZoom course called “Blues Guitar 101” by Lance Vallis. This new course covers the basics of playing a 12-bar blues, the form of the 12-bar blues, and a variety of ways to play both melodic parts/solos and rhythmic accompaniment.
Since I’m the one who had the task of transcribing most of the stuff that Mr. Vallis plays throughout the video, I do have some form of bias with regard to my opinion. It’s a very good blues course that features a lot of material that demonstrates Mr. Vallis’ prowess in improvising blues lines. He also offers a number of useful licks and phrases that you can instantly use in many blues jam situations.
If you want to get into the blues right away, I would suggest getting this course by clicking on the “Blues Guitar 101” graphic above. After mastering the various concepts presented, I suggest moving on to Steve Stine’s 96 Blues Licks.
GuitarZoom, one of my long-term employers, has another very useful course for anybody who’s interested in learning new songs on the guitar in a fast and intelligent way. This new course is called Songfire. Written by GuitarZoom’s guitarist-in-residence and professor of modern guitar at North Dakota State University, Steve Stine, Songfire offers a no-nonsense approach that can lead any guitar beginner into hearing how most songs work and eventually learning them in the process.
Now, for those who think they can become instant virtuoso players with this course, you are mistaken. This course is NOT about gaining virtuoso technique in an instant (it takes years of hard work and practice to gain that). This course is more about being able to play many of the songs you hear and enjoy on your guitar quickly. It is not about playing your favorite songs note for note, but about understanding the underlying harmonic structure (chord progressions, etc.) based on what you hear. Upon finishing the course, you’d be able to listen actively to many pop and rock songs and play through their chord progressions in a matter of minutes.
Okay, maybe some will be disappointed that virtuoso guitarist Steve Stine released a non-virtuoso course. Well, that is not the point of Songfire. Songfire’s intent is to give learners the ability to hear music as a whole and be able to at least play a semblance of it, following along with the chord progression WITHOUT reading notation or tab. Bear in mind that seemingly simple, entry-level topics like those in Songfire could easily stimulate any beginning guitarist into digging deeper into the workings of music, eventually figuring out what most of us would consider virtuoso technique. The positive and pleasurable effects of being able to successfully learn a new song can always lead to bigger things, musically speaking.
If you want to get access to the course, just click on the Songfire image above. As usual, sheet music and text by none other than yours truly.
It was the first Saturday of the year that I had formally worked with both the choir and worship band of UCCP-Makati Church of Christ Disciples. It was tough and challenging yet at the same time very fulfilling. I have seen the logistical challenges that I would face should I try to unite both choir and worship band. The task seems daunting, but I hope for the best. I am really hoping that I’m being of some help to that rather small community of believers.
Just this morning was what I would call testing the waters. Although I had played with the worship band a couple of times, this was the first time I would be at my most active: directing the band while playing lead guitar. I played through a number of songs with the church’s regular pianist and choir conductor. I was trying very hard to demonstrate that there need not be a divide between a traditional piano-and-choir group and a contemporary worship band. In my mind it should just be a single worship group that is engaged throughout the worship service. Next Saturday I will again be hauling a number of items from my home studio to the church, teaching music theory and instrument technique in the afternoon, and rehearsing with both choir and worship band.
As things go at this time, it seems that the worship band isn’t ready yet for the rather technical aspects of playing the kind of music featured in the anthem section of the worship service. I aspire to be able to pass down whatever skills I have to the band and the choir so that every musical aspect of the service could be covered by both as a single unit. It doesn’t have to matter whether they are singing traditional hymns or covering the kind of stuff that Don Moen and Ron Kenoly would play. I am optimistic that this will happen given training and patience.
Like my studio persona, I am a teacher, equipment technician, musician, and music director rolled into one package. It’s tough work for which I do not expect remuneration of any sort. What lies ahead of me are more challenges from both a personal and professional perspective. Why would I be crazy enough to put out such effort every week? It’s because I am answering the call of The Lord. I have no other justification for it. God has called me to use my skills for His purpose. I will abide by what I believe is my calling and purpose in life. So it has begun, my life as a volunteer music worker.
Good day. This is Mark Galang with another post about music production in compliance with the requirements for the Berklee College of Music course called “Introduction to Music Production”, hosted for free by Coursera. In this post, I will discuss how to use the five most important synthesizer modules. These are your oscillator, filter, amplifier, envelope, and low frequency oscillator or LFO. For this tutorial, I will be using three kinds of software synthesizers namely RGC Audio’s Z3ta +1, MinimogueVA, and Mothman 1966. We can also consider this tutorial as a sort of crash course into subtractive synthesis.
1. Voltage Controlled Oscillator (VCO)

In any synthesizer (even those that play back samples), the oscillator is the sound source. It produces the waveform/s that you need to shape to produce the desired sound. The most basic parameter we get to control in an oscillator is the waveform selection. We usually have a number of waveforms to choose from, including sine waves (fundamental frequency only), square and triangle waves (fundamental frequency + odd harmonics only), and sawtooth waves (fundamental + both odd and even harmonics).
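As a rough sketch of what an oscillator does (my own illustration in Python, not how any of these plugins are implemented internally), here is a naive generator for the three waveform families just mentioned:

```python
import math

def oscillator(waveform, freq, sample_rate=44100, seconds=0.01):
    """Naive waveform generator. Being naive, it aliases at high
    frequencies -- fine for illustrating the basic shapes."""
    samples = []
    for i in range(int(sample_rate * seconds)):
        phase = (freq * i / sample_rate) % 1.0  # position within the cycle, 0-1
        if waveform == "sine":
            samples.append(math.sin(2 * math.pi * phase))
        elif waveform == "square":
            samples.append(1.0 if phase < 0.5 else -1.0)
        elif waveform == "saw":
            samples.append(2.0 * phase - 1.0)
        else:
            raise ValueError(waveform)
    return samples

a440_saw = oscillator("saw", 440)  # a bright, buzzy A above middle C
```

Played back, the sine sounds pure, the square hollow, and the saw bright and buzzy, exactly because of the harmonic content listed above.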
In the Mothman 1966, three waveforms are available called diamond (triangle), 8-bit saw (sawtooth), and wind (sine):
The MinimogueVA (obviously modeled after the Minimoog) has a couple more parameters other than standard waveform selection. You can adjust the tuning and the register of the oscillator as well as apply an overdrive (distortion) effect.
The Z3ta is the most complex of these softsynths. Its oscillator section has more choices for waveforms along with more parameters to shape them. There is even an option available for users to draw their own custom waveforms.
2. Voltage Controlled Filter (VCF)
More complex waveforms such as the sawtooth can often sound harsh, and this is why a filter (more properly called a voltage controlled filter or VCF) is present in all synthesizers. The filter functions much like an EQ, except that in synthesizers we can expect its parameters to change over a short period of time. The most common kind of filter in a synthesizer is the low-pass filter, since it is the most direct way to tame brightness or harshness. In a synthesizer, the cutoff parameter is probably the most important. In a typical low-pass filter, raising the knob or slider for cutoff raises the cutoff frequency, meaning that you cut off less of the high frequencies and make the sound brighter. Lowering the cutoff knob cuts more of the high frequencies, making the sound of your oscillator darker.
One of the fun things about using these synthesizers is modulating the filter’s cutoff, either manually or through an LFO. Sometimes you may want the realtime use of the filter cutoff to be more obvious. This is where the resonance parameter can be very useful. Increasing the resonance boosts the frequencies around the cutoff point, making your use of the filter more pronounced. With the resonance turned up to a certain level, you hear an emphasized band of frequencies follow along as you sweep the cutoff knob or slider in either direction.
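To get a feel for what the cutoff parameter does, here is a one-pole low-pass filter sketch in Python. This is a simplification of my own for illustration; real synth filters are usually resonant multi-pole designs, and this one has no resonance control at all:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """Darken a signal: the lower the cutoff, the more of the
    high-frequency content gets smoothed away."""
    # Coefficient from the standard RC-filter difference equation
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)   # move a fraction of the way toward the input
        out.append(y)
    return out
```

Feeding the same signal through with a lower cutoff_hz yields a noticeably duller result, which is exactly the "darker" behavior described above.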
The Mothman’s VCF features the basic control parameters:
In the MinimogueVA, the filter’s resonance is aptly called Emphasis. Contour Amount adjusts how much the filter envelope modulates the cutoff, and Velocity sets how strongly your playing dynamics affect the filter:
The Z3ta’s filter can be changed from the standard low-pass to others such as notch, band pass, and high pass:
3. Voltage Controlled Amplifier (VCA)

The synthesizer’s amplifier controls the amplitude of the signal coming from the oscillator after it passes through the filter. The most basic control over the amplifier is the master volume section of the synthesizer, as shown in all three featured synthesizers:
However, we can also have more specific control over the amplifier, allowing us to shape how each note is articulated. This is where we make use of the…
4. Envelope Generator

The envelope shapes the amplitude of the sound at certain points over a very short amount of time. The amplifier’s envelope has four parameters:
Attack Time – The amount of time it takes for the signal to reach peak amplitude after a note on command (i.e. pressing a key).
Decay Time – The amount of time it takes for the signal to reach the designated sustain level.
Sustain Level – A designated amplitude level during the main sequence of the sound’s duration. The level of the sound after decay time has passed.
Release Time – The amount of time it takes for the sound to go from sustain level to zero after a note off command.
These parameters conveniently spell out the acronym ADSR.
By adjusting these parameters, we can emulate the responses of various instruments such as the organ, violin, brass, piano, etc. For example, the organ has a “switch” type of envelope, and so we would set attack to 0, decay to 0, sustain level to any amount desired, and release to 0. If we want the synthesizer to have a piano-like response where the note dies off slowly after pressing a key, we set attack to 0, have a long decay time of about a few seconds, and then set sustain level and release time to 0. If we want the sound to “swell”, we set the attack time above 0.
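The ADSR behavior just described can be captured in a short function. This is a simplified linear-segment sketch of my own (real synths often use exponential curves):

```python
def adsr(t, attack, decay, sustain, release, note_length):
    """Amplitude (0.0-1.0) at time t (seconds) for a note held
    for note_length seconds. All segments are linear."""
    if t < attack:                            # rising toward peak
        return t / attack
    if t < attack + decay:                    # falling toward sustain level
        frac = (t - attack) / decay if decay > 0 else 1.0
        return 1.0 - frac * (1.0 - sustain)
    if t < note_length:                       # key held: hold sustain level
        return sustain
    if release <= 0:                          # key released
        return 0.0
    return max(0.0, sustain * (1.0 - (t - note_length) / release))

# Organ-like "switch": instant on, full level while held, instant off
organ = adsr(0.5, 0.0, 0.0, 1.0, 0.0, 1.0)    # stays at 1.0 while held
# Piano-like: instant attack, long decay, no sustain
piano = adsr(1.5, 0.0, 3.0, 0.0, 0.0, 2.0)    # halfway through the decay
```

Sampling this function once per audio frame and multiplying it into the oscillator output is essentially what the amplifier's envelope generator does.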
The amplitude envelope generator is pretty much standard in all three featured synths, although the MinimogueVA has got envelope controls for filter as well and the Z3ta has additional parameters beyond the traditional ADSR:
5. Low Frequency Oscillator (LFO)
Rather than adjusting all the parameters of our synthesizers by hand, we can assign an LFO to do this for us in a cyclical manner. An LFO typically operates at a frequency below the audible range, producing a repetitive pattern determined by the kind of waveform used and the rate at which the LFO operates.
We can use the LFO to have control over the oscillator for vibrato effects, the amplifier for tremolo effects, and the filter for automatic filter sweeps.
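As an illustration of the tremolo case (a sketch of my own, not how any of these plugins implement it), here is a low-frequency sine wave modulating amplitude:

```python
import math

def apply_tremolo(samples, rate_hz=5.0, depth=0.5, sample_rate=44100):
    """Modulate amplitude with a low-frequency sine: classic tremolo.
    depth=0 leaves the signal untouched; depth=1 swings all the way
    down to silence on each cycle."""
    out = []
    for i, x in enumerate(samples):
        lfo = math.sin(2 * math.pi * rate_hz * i / sample_rate)  # -1..1
        gain = 1.0 - depth * (0.5 + 0.5 * lfo)   # ranges from 1-depth to 1
        out.append(x * gain)
    return out
```

Routing the same LFO value to pitch instead of gain would give vibrato, and routing it to the filter cutoff would give an automatic filter sweep.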
The Mothman’s LFO can be assigned to the oscillator or filter. You can select the waveform as well as adjust its speed.
For the MinimogueVA, the third oscillator (OSC3) can be used as an LFO and can be assigned to various parameters:
As for the Z3ta, we can make use of the modulation matrix to route the LFO to control the other components of the synth ranging from the oscillator to the main volume control:
And so this ends a rather lengthy discussion about the five most important modules of any synthesizer.
It took me quite a while to write this tutorial, but I think I could improve it with video and audio examples. As of this time, I’m not capable of capturing video for a demonstration. If time permits, I will record some audio examples that demonstrate the functions of each synthesizer module.
Every serious guitarist wants mastery over the instrument wherein one would just look at the fretboard and find every note required to make a solo sound great instantly. Of course such a feat requires practice, but the new GuitarZoom course called “CAGED Made Simple” by Steve Stine helps achieve just that and more.
“CAGED Made Simple” teaches you how to take advantage of the five basic chord shapes to visualize the fretboard better, leading you to do a number of things such as learning songs faster, playing chords differently, and even soloing better. The best part of the course is that it’s taught by Steve Stine, professor of modern guitar at North Dakota State University. Sure, you can find a number of guitar virtuosos teaching CAGED on YouTube, but Steve explains complex musical concepts in such a way that even little children would have no trouble learning them.
And so, if you want to give your guitar playing an upgrade, just click on the logo below:
P.S. Here’s a little bit of shameless self-promotion this time. If you think Steve’s awesome in this course, try checking out…ahem…the sheet music (transcribed by none other than yours truly). It contains note-for-note transcriptions of Steve’s improvisations. Just saying….
Hello. My name is Mark Galang, and I’m here today to talk about riding the fader on a musical performance. This piece has been written in compliance with the peer-reviewed assignment requirement for the course “Introduction to Music Production” by the Berklee College of Music, hosted for free by Coursera.org.
Nothing is more satisfying than hearing a musical performance by humans. However, as much as we’d like human performance to be perfect, it is far from being one. While the quirks of a live performance may sometimes be tolerated, studio recordings usually are more demanding. Therefore we use a couple of processes here and there to address imperfections, and one of these techniques is riding the fader. Riding the fader aims to control dynamics over a recorded audio track in an effort to achieve some sort of balance, i.e. to decrease the volume of sections that are too loud and increase that of sections that are too soft. To demonstrate how to do this, I have opened up a project in Cakewalk Sonar 11, and I will be manipulating the bass track.
To start riding the fader, I have to enable automation write first by clicking on the W button on the bass track. You’ll notice that it turns red as soon as I click on it. Once that’s been accomplished, I can start recording automation as soon as I press play or record. Let’s begin.
1. Opening a Project
For this assignment, I have used the same project I recorded for the previous piece (How to Prepare a Project and Record Audio in a DAW). I selected the bass track for this particular task.
2. Enabling Automation Write
To start actually recording volume fader movements (“riding the fader”), I clicked on the small button that looks like a “W”. It’s the automation write button. Once it turns red, I know that it has been enabled and I could then start recording fader movements after I hit the play or record button.
3. Riding the Fader
I started playing back the project and then manipulated the volume fader so that Cakewalk Sonar would begin recording my fader movement. Generally, I try my best to follow the shape of the waveform to somewhat preserve the actual dynamics I recorded during performance. I was aiming to somewhat reduce the amplitude of sections I felt I had played too loud.
4. Editing the Volume Envelope
Once I have recorded the volume fader movements, I can now see that Cakewalk Sonar has generated a volume envelope with nodes that I can move around. If I want to make adjustments to the envelope, I can just move the nodes either upwards to increase volume or downwards to decrease.
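Conceptually, the volume envelope the DAW records is just a list of (time, gain) nodes that get linearly interpolated and multiplied into the audio. Here is a rough Python sketch of that idea (my own simplification for illustration; this is not how Sonar works internally):

```python
def apply_volume_envelope(samples, nodes, sample_rate=44100):
    """Apply a list of (time_seconds, gain) automation nodes,
    linearly interpolated, to a mono signal."""
    out = []
    for i, x in enumerate(samples):
        t = i / sample_rate
        if t <= nodes[0][0]:          # before the first node
            gain = nodes[0][1]
        elif t >= nodes[-1][0]:       # after the last node
            gain = nodes[-1][1]
        else:
            # find the segment this sample falls in and interpolate
            for (t0, g0), (t1, g1) in zip(nodes, nodes[1:]):
                if t0 <= t < t1:
                    gain = g0 + (g1 - g0) * (t - t0) / (t1 - t0)
                    break
        out.append(x * gain)
    return out

# Dip a too-loud middle section down to 70% and back up
nodes = [(0.0, 1.0), (1.0, 0.7), (2.0, 1.0)]
```

Dragging a node up or down in the DAW simply changes one of those gain values, which is why editing the envelope after the fact works so cleanly.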
Upon completing the task of riding the fader, I realized that it is far from perfect. I was just using the mouse to perform this task and I think I would have achieved better results if I had a control surface connected to my DAW. I think that it would take me a while to edit the nodes in the automation that I wrote. I was not happy with the result. In the end, I decided to scrap my work and I would try another time to ride the fader (or perhaps use a compressor plugin).
I do think that riding the fader is a skill that takes as much precision as playing an instrument. It demands careful listening and practice to achieve good results without resorting to editing the envelope later. I’m not surprised that compressors were developed to automate this process.
I hope that this short piece has helped you in understanding how to control dynamics in musical recordings through riding the fader. If you have any comments, feedback or constructive criticism for me regarding this post, please let me know. I would be happy to read them as I would like to further improve myself. Thank you very much for your time and attention.