Explorations Into Jazz

I love listening to jazz. I also love playing it (or at the very least I try to). In another effort to prostitute myself to cyberspace as a musician, a.k.a. shameless self-promotion, here are some recordings I made while attending Gary Burton’s Jazz Improvisation course via Coursera. These early 2013 recordings are my (futile) attempts at improvising over jazz standards, using mostly piano and/or guitar, plus a melodica on the Chick Corea/Return to Forever classic “500 Miles High”:

I hope that you (whoever you are and wherever you might be) enjoyed this jazz crap I’ve been trying to spew out from my innermost being, as a change from my own original works.

Using the Five Most Important Synthesizer Modules

Good day. This is Mark Galang with another post about music production, written in compliance with the requirements of the Berklee College of Music course “Introduction to Music Production”, hosted for free by Coursera. In this post, I will discuss how to use the five most important synthesizer modules: the oscillator, filter, amplifier, envelope, and low frequency oscillator (LFO). For this tutorial, I will be using three software synthesizers, namely RGC Audio’s Z3ta+ 1, the MinimogueVA, and the Mothman 1966. We can also consider this tutorial a sort of crash course in subtractive synthesis.

1. Oscillator

In any synthesizer (even those that play back samples), the oscillator is the sound source. It produces the waveforms that you shape to get the sound you want. The most basic parameter we control in an oscillator is waveform selection. We usually have a number of waveforms to choose from, including sine (fundamental frequency only), square (a pulse wave; fundamental + odd harmonics), triangle (also fundamental + odd harmonics, but with the upper harmonics much weaker), and sawtooth (fundamental + both odd and even harmonics).
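To make the harmonic makeup of these waveforms concrete, here is a minimal NumPy sketch that generates one second of each; the sample rate and pitch are arbitrary choices of mine, not anything these particular synths prescribe:

```python
import numpy as np

fs = 44100               # sample rate in Hz (arbitrary but common choice)
f0 = 220.0               # fundamental frequency in Hz
t = np.arange(fs) / fs   # one second of sample times

# Sine: the fundamental frequency only
sine = np.sin(2 * np.pi * f0 * t)

# Square (a pulse wave): fundamental + odd harmonics, rolling off as 1/n
square = np.sign(np.sin(2 * np.pi * f0 * t))

# Triangle: also fundamental + odd harmonics, but rolling off as 1/n^2 (mellower)
triangle = (2 / np.pi) * np.arcsin(np.sin(2 * np.pi * f0 * t))

# Sawtooth: fundamental + both odd and even harmonics, rolling off as 1/n
saw = 2 * (t * f0 - np.floor(t * f0 + 0.5))
```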

In the Mothman 1966, three waveforms are available: diamond (triangle), 8-bit saw (sawtooth), and wind (sine):

01a - Mothman 1966 Osc

The MinimogueVA (obviously modeled after the Minimoog) has a few more parameters beyond standard waveform selection. You can adjust the tuning and register of the oscillator as well as apply an overdrive (distortion) effect.

01b - MinimogueVA Osc

The Z3ta is the most complex of these softsynths. Its oscillator section has more choices for waveforms along with more parameters to shape them. There is even an option available for users to draw their own custom waveforms.

01c - Z3ta Osc

2. Voltage Controlled Filter (VCF)

More complex waveforms such as the sawtooth can often sound harsh, which is why a filter (more properly, a voltage controlled filter or VCF) is present in virtually every synthesizer. The filter functions much like an EQ, except that in a synthesizer we expect its parameters to change over short periods of time. The most common kind of filter in a synthesizer is the low-pass filter, the rationale being that it is the most direct way to tame brightness or harshness. Its most important parameter is the cutoff. In a typical low-pass filter, raising the cutoff knob or slider raises the cutoff frequency, meaning you cut fewer of the high frequencies and the sound becomes brighter. Lowering the cutoff cuts more of the high frequencies, making the oscillator’s sound darker.

One of the fun things about these synthesizers is modulating the filter’s cutoff, either manually or through an LFO. Sometimes you may want that realtime cutoff movement to be more audible, and this is where the resonance parameter comes in. Resonance boosts the frequencies right around the cutoff point, so the higher you set it, the more pronounced your sweeps become as you move the cutoff knob or slider in either direction.
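To make cutoff and resonance concrete in code, here is a minimal sketch of a Chamberlin state-variable filter in Python, one classic way to build a resonant low-pass; the function and parameter names are my own, and real synth filters are considerably more refined:

```python
import numpy as np

def resonant_lowpass(signal, fs, cutoff_hz, resonance):
    """Naive Chamberlin state-variable filter over a float signal.

    resonance runs from 0.0 (none) to just under 1.0; raising it boosts
    frequencies around the cutoff, which is what makes a manual or
    LFO-driven cutoff sweep so audible. This naive form is only stable
    for cutoffs well below fs / 6.
    """
    f = 2.0 * np.sin(np.pi * cutoff_hz / fs)   # frequency coefficient
    q = 1.0 - resonance                        # damping: less damping = more resonance
    low = band = 0.0
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        low += f * band              # low-pass tap accumulates the band-pass
        high = x - low - q * band    # high-pass is whatever is left over
        band += f * high             # band-pass integrates the high-pass
        out[i] = low                 # keep the low-pass output
    return out

# e.g. darken the sawtooth from the earlier sketch:
# darker = resonant_lowpass(saw, fs, cutoff_hz=800.0, resonance=0.7)
```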

The Mothman’s VCF features the basic control parameters:

02a - Mothman 1966 VCF

In the MinimogueVA, the filter’s resonance is aptly called Emphasis. Contour Amount adjusts the Q of the filter, and Velocity adjusts how quickly the cutoff responds:

02b - MinimogueVA VCF

The Z3ta’s filter can be changed from the standard low-pass to other types such as notch, band-pass, and high-pass:

02c - Z3ta VCF

3. Amplifier

The synthesizer’s amplifier controls the amplitude of the signal coming from the oscillator after it passes through the filter. The most basic control over the amplifier is the synthesizer’s master volume section, as shown in all three featured synthesizers:

03b - Mothman 1966 VCA

03a - MinimogueVA VCA

03c - Z3ta VCA

However, we can also have more specific control over the amplifier, allowing us to shape how each note is articulated. This is where we make use of the…

4. Envelope

The envelope generator, typically wired to the amplifier, adjusts the amplitude of the sound at certain points over a very short span of time. The amplifier’s envelope has four parameters:

Attack Time – The amount of time it takes for the signal to reach peak amplitude after a note-on command (i.e., pressing a key).

Decay Time – The amount of time it takes for the signal to fall from that peak to the designated sustain level.

Sustain Level – The amplitude held for the main portion of the sound’s duration, i.e., the level of the sound after the decay time has passed.

Release Time – The amount of time it takes for the sound to fall from the sustain level to zero after a note-off command.

These parameters spell out conveniently as the acronym ADSR.

By adjusting these parameters, we can emulate the responses of various instruments such as the organ, violin, brass, piano, etc. For example, the organ has a “switch” type of envelope, so we would set attack to 0, decay to 0, sustain to any level desired, and release to 0. If we want the synthesizer to have a piano-like response, where the note dies off slowly after a key is pressed, we set attack to 0, use a long decay time of a few seconds, and set the sustain level and release time to 0. If we want the sound to “swell”, we set the attack time above 0.
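As a rough illustration of those three presets, here is a sketch of a linear ADSR generator in Python; real synth envelopes usually use exponential segments rather than straight lines, and the times and levels below are just plausible values of mine:

```python
import numpy as np

def adsr(attack, decay, sustain, release, held, fs=44100):
    """Linear ADSR amplitude envelope.

    attack/decay/release are times in seconds, sustain is a level (0-1),
    and held is how long the key stays down. Multiply the result against
    an oscillator signal of the same length to articulate a note.
    """
    a = np.linspace(0.0, 1.0, max(1, int(attack * fs)))      # rise to peak
    d = np.linspace(1.0, sustain, max(1, int(decay * fs)))   # fall to sustain
    s = np.full(max(0, int(held * fs) - len(a) - len(d)), sustain)
    r = np.linspace(sustain, 0.0, max(1, int(release * fs))) # fall to silence
    return np.concatenate([a, d, s, r])

organ = adsr(attack=0.0, decay=0.0, sustain=0.8, release=0.0, held=1.0)  # "switch" response
piano = adsr(attack=0.0, decay=3.0, sustain=0.0, release=0.0, held=3.0)  # note dies off on its own
swell = adsr(attack=1.5, decay=0.5, sustain=0.7, release=0.5, held=3.0)  # slow swell into the note
```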

The amplitude envelope generator is pretty much standard across all three featured synths, although the MinimogueVA has envelope controls for the filter as well, and the Z3ta has additional parameters beyond the traditional ADSR:

04a - Mothman 1966 Envelope

04b - MinimogueVA Envelope

04c - Z3ta Envelope

5. Low Frequency Oscillator (LFO)

Rather than adjusting all of our synthesizer’s parameters by hand, we can assign an LFO to do it for us in a cyclical manner. An LFO typically operates below the audible frequency range (roughly under 20 Hz), producing a repeating pattern determined by the waveform chosen and the rate at which the LFO runs.

We can use the LFO to control the oscillator for vibrato effects, the amplifier for tremolo effects, and the filter for automatic filter sweeps.
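Here is a small NumPy sketch of the first two uses, tremolo and vibrato, driven by a 5 Hz sine LFO; the rates and depths are arbitrary illustrative values:

```python
import numpy as np

fs = 44100
t = np.arange(2 * fs) / fs   # two seconds of sample times
f0 = 220.0                   # carrier pitch

# The LFO itself: a 5 Hz sine, far below audible pitch, used only as a control signal
lfo = np.sin(2 * np.pi * 5.0 * t)

# Tremolo: the LFO modulates the amplifier, wobbling the gain around a center value
carrier = np.sin(2 * np.pi * f0 * t)
tremolo = (1.0 + 0.3 * lfo) / 1.3 * carrier

# Vibrato: the LFO modulates the oscillator's pitch by a few Hz;
# integrating the instantaneous frequency gives the phase
phase = 2 * np.pi * np.cumsum(f0 + 4.0 * lfo) / fs
vibrato = np.sin(phase)
```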

The Mothman’s LFO can be assigned to the oscillator or filter. You can select the waveform as well as adjust its speed.

05a - Mothman 1966 LFO

For the MinimogueVA, the third oscillator (OSC3) can be used as an LFO and can be assigned to various parameters:

05b - MinimogueVA LFO

As for the Z3ta, we can make use of the modulation matrix to route the LFO to control the other components of the synth ranging from the oscillator to the main volume control:

05c - Z3ta LFO

And so this ends a rather lengthy discussion about the five most important modules of any synthesizer.

It took me quite a while to write this tutorial, but I think I could improve it with video and audio examples. At the moment, I’m not able to capture video for a demonstration. If time permits, I will record some audio examples that demonstrate the function of each synthesizer module.

Riding the Fader on a Musical Performance

Hello. My name is Mark Galang, and I’m here today to talk about riding the fader on a musical performance. This piece has been written in compliance with the peer-reviewed assignment requirement for the course “Introduction to Music Production” by the Berklee College of Music, hosted for free by Coursera.org.

Nothing is more satisfying than hearing a musical performance by humans. However, as much as we’d like a human performance to be perfect, it is far from being one. While the quirks of a live performance may sometimes be tolerated, studio recordings are usually more demanding. We therefore use a few processes here and there to address imperfections, and one of these techniques is riding the fader. Riding the fader aims to control dynamics over a recorded audio track in an effort to achieve balance, i.e., to decrease the volume of sections that are too loud and increase the volume of sections that are too soft. To demonstrate how to do this, I have opened up a project in Cakewalk Sonar X1, and I will be manipulating the bass track.

To start riding the fader, I first have to enable automation write by clicking on the W button on the bass track. You’ll notice that it turns red as soon as I click on it. Once that’s done, automation is recorded whenever I press play or record. Let’s begin.

1. Opening a Project

01-Opening a Project and Selecting Bass Track

For this assignment, I have used the same project I recorded for the previous piece (How to Prepare a Project and Record Audio in a DAW). I selected the bass track for this particular task.

2. Enabling Automation Write

02-Enabling Automation Write

To start actually recording volume fader movements (“riding the fader”), I clicked on the small button that looks like a “W”: the automation write button. Once it turns red, I know that it has been enabled, and I can start recording fader movements after hitting the play or record button.

3. Riding the Fader

03-Riding the Fader

I started playing back the project and then manipulated the volume fader so that Cakewalk Sonar would record my fader movements. Generally, I try my best to follow the shape of the waveform in order to preserve the actual dynamics of the performance. Here I was aiming to reduce the amplitude of the sections I felt I had played too loud.

4. Editing the Volume Envelope

04-Editing the Volume Envelope

Once I had recorded the volume fader movements, I could see that Cakewalk Sonar had generated a volume envelope with nodes I can move around. If I want to adjust the envelope, I just move the nodes upwards to increase the volume or downwards to decrease it.
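Conceptually, the envelope is just a piecewise-linear gain curve interpolated between those nodes and multiplied against the audio, sample by sample. Here is a hedged Python sketch of that idea; the node times and gains are made up, and this is certainly not how Sonar stores automation internally:

```python
import numpy as np

def apply_volume_envelope(audio, fs, node_times, node_gains):
    """Interpolate between automation nodes and apply the gain per sample."""
    sample_times = np.arange(len(audio)) / fs
    gain_curve = np.interp(sample_times, node_times, node_gains)
    return audio * gain_curve

# Hypothetical ride: duck a too-loud passage between 2.5 s and 6 s
times = [0.0, 2.0, 2.5, 6.0, 8.0]   # node positions in seconds
gains = [1.0, 1.0, 0.6, 0.6, 1.0]   # gain multipliers at each node
# quieter_bass = apply_volume_envelope(bass_audio, 44100, times, gains)
```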

Upon completing the task of riding the fader, I realized the result was far from perfect. I was using only the mouse to perform this task, and I think I would have achieved better results with a control surface connected to my DAW. Editing all the nodes in the automation I wrote would take a while, and I was not happy with the result. In the end, I decided to scrap my work and try riding the fader another time (or perhaps use a compressor plugin).

I do think that riding the fader is a skill that takes as much precision as playing an instrument. It demands careful listening and practice to achieve good results without resorting to editing the envelope later. I’m not surprised that compressors were developed to automate this process.

I hope that this short piece has helped you in understanding how to control dynamics in musical recordings through riding the fader. If you have any comments, feedback or constructive criticism for me regarding this post, please let me know. I would be happy to read them as I would like to further improve myself. Thank you very much for your time and attention.

How to Prepare a Project and Record Audio in a DAW

Hello, dear readers. It’s Mark A. Galang again with another installment of audio production tutorials. This tutorial was written in compliance with the peer review assignment requirement of the Berklee course “Introduction to Music Production”, hosted by Coursera. I hope that you all find this tutorial informative.

This tutorial shows how I prepare a project in my DAW for recording. It also gives some insight into how I compose and record music. I use Cakewalk Sonar X1 as my DAW software. Let’s get started.

1. Sequencing the Drums

01 Sequencing the Drums

Before I actually create a project in Sonar, I usually write drum parts, orchestral parts, etc. using Sibelius 6. In this case, I just wrote the drum part for this project.

2. Exporting to MIDI

02 Exporting to MIDI

After writing the drum part in Sibelius, I save my work and then export it as a MIDI file to a folder of my choosing.

3. Creating a New Project

03 Creating a New Project

After opening Sonar X1, I use an atypical method of creating a project: I close the project creation wizard and simply drag the MIDI file I created into Sonar, which automatically opens it as a project.

4. Creating an Instrument Track

04 Creating an Instrument Track

Once the MIDI file has opened, I create an instrument track to play back the MIDI data in the project. In this case, I’m using a VST instrument called EZDrummer. An instrument track is a combination of a MIDI track and an audio track: the data displayed is MIDI, but the playback comes from an audio source, usually a software instrument.

5. Transferring MIDI data to Instrument Track

05 Transferring MIDI Track to Instrument Track

Instead of assigning EZDrummer as the output for my MIDI track, I simply drag the MIDI data into the instrument track and then delete the resulting empty MIDI track. The instrument track can read MIDI data anyway, so I have no further use for the empty one.

6. Creating an Audio Track

06 Creating an Audio Track

I then create an audio track by right-clicking on the empty space where the channels are supposed to be in Track View and selecting the “Insert Audio Track” command.

7. Labeling Audio Track and Setting Up for Recording

07 Labeling Audio Track and Setting Up Channel for Recording

After creating the audio track, I label it. In this instance, I’m recording a bass guitar track, so I simply label it “Bass”. Afterwards, I select the appropriate input source for the audio track. In this case, my bass is connected to the left instrument input of my audio interface, so I select the left input in my DAW. Selecting it this way lets me record my bass part in mono.

8. Saving as a Project File

08 Saving as a Project File

Because Sonar opened my project as a MIDI file, it cannot save audio data yet. I would then save the project as a “Normal” CWP (Cakewalk Project) file with the “Copy All Audio With Project” option ticked so that I can assign the project and audio data folders for easier file management.

9. Arming the Audio Track for Recording

09 Arming the Audio Track for Recording

Before I begin recording, I click on the red button on my audio track so that it is “armed” for recording. Once the audio track is armed, I check my instrument’s recording levels on my audio interface and in the DAW. I am now ready to record my bass parts.

10. Setting up Metronome/Click and Countoff

10 Setting up Metronome or Click and Countoff

Before I start recording, I check my metronome/click settings. I prefer an audio click rather than MIDI, and I set the record count-in to just “1” (one bar). Since the time signature of my project is 7/8 with a tempo of 100 bpm (counted in quarter notes), I expect to hear seven fast clicks before the DAW starts recording my audio, as the arithmetic below shows.
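For the curious, here are those seven fast clicks worked out in a few lines of Python (my own back-of-envelope sketch, not anything Sonar exposes):

```python
# 7/8 at 100 bpm, with the tempo counted in quarter notes:
quarter = 60.0 / 100   # one quarter note lasts 0.6 s
eighth = quarter / 2   # the click subdivides to eighth notes: 0.3 s each
bar = 7 * eighth       # one 7/8 bar = 2.1 s
print(eighth, bar)     # 0.3 2.1 -> seven 0.3 s clicks per count-in bar
```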

11. Recording an Audio Track

11 Recording Audio

Once the levels are set and the audio track is armed, I start recording by pressing “R” on my computer keyboard. I count along to the count-in clicks (one, two, three, four, five, six, sev) and then start playing my bass parts. Once I’m done recording, I press the space bar to stop.

12. Cloning an Audio Track for a Second Take

12 Cloning an Audio Track for Second Take

Because I want a couple of recorded options, I record a number of takes. To do this, I just clone the audio track where my bass is recorded: I right-click on the track, select the “Clone Track” option, and Sonar duplicates the audio track in its entirety.

13. Setting up Cloned Audio Track for a Second Take

13 Setting Up Cloned Audio Track for Second Take

The cloned audio track contains all of the data from the original, including the recorded audio. Therefore, I delete the recorded audio from the cloned track to empty it so I can record a second take. To lessen distractions, I mute the original audio track before recording the second take.

14. Recording a Second Take

14 Recording a Second Take

Once my cloned audio track is ready, I record a second take following the steps described above.

After completing all of these steps, I think the entire effort went well. I was able to set up a project and record an audio track. Upon reviewing the project, I think that I should have saved the project immediately as a normal DAW project before setting up the audio track so that I wouldn’t run into a problem later should the application crash. Some of the steps I took to create the DAW project are atypical. However, this fits my usual workflow which involves composing and notating music first before recording audio.

For those who are interested, here’s the track I recorded for this particular tutorial:

I hope that you all have enjoyed reading and learning about recording audio in a DAW through this post. Thank you for your time and I hope to hear from you. If you have any feedback, comments, or constructive criticism, please feel free to let me know as I would love to learn new things as well.

How to Record an Electric Guitar or Bass without an Amplifier

Good day. My name is Mark Galang, and I am a freelance musician and composer from Paranaque City, Philippines. This is my first peer-reviewed assignment for Introduction to Music Production, a Coursera.org course provided by the Berklee College of Music. Part of my work involves recording electric guitar and bass parts. Certain circumstances prevent me from recording with an amplifier (I lack a good room, and I have neighbors to consider), so I usually record without one. This tutorial will teach you how to record an electric guitar or bass without an amplifier.

For this tutorial, we need the following equipment:

A computer with a DAW of your choice installed (I’m using a PC with a copy of Cakewalk Sonar)

IMG_0582

USB or Firewire audio interface (in my case, a TASCAM US-122)

IMG_0581

Electric Guitar or Bass

IMG_0584

Instrument cable with 1/4″ plug (at least one)

IMG_0585

Studio monitors and/or headphones

IMG_0583

Guitar/Bass effects pedals and extra instrument cable/s (optional)

IMG_0586

1. Before you begin recording, make sure that your audio interface is connected to your computer via USB or Firewire and that your speakers and/or headphones are connected to the line out/headphone out of your audio interface.

2. Set the level of your audio interface’s instrument input channel to zero. If your device can switch between mic and instrument modes (LoZ and HiZ), switch it to instrument (HiZ) mode first.

IMG_0589

3. Plug your electric guitar or bass into the audio interface’s instrument input (also known as the guitar input) using your instrument cable. If you have effects pedals, connect them between the instrument and the audio interface.

4. Open up your DAW and create a new project.

New Project DAW

5. Select one of your DAW’s audio tracks and assign the input device where your electric guitar or bass is connected.

Select Input for Recording

6. Raise and adjust the level of the input channel on the audio interface where your instrument is connected. Before recording, play something on the instrument to check the levels, and make sure the level is not so high that it distorts. Most audio interfaces have an LED or a display that lets you check levels. If the level rises above 0 dB, the meter goes into the “red” zone (or a red LED flashes), meaning the level is too high and will cause clipping, a kind of distortion that is definitely not the good electric guitar amp kind! (A quick way to check levels in code is sketched below.)
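If you ever want to sanity-check a take outside the DAW’s meters, here is a small Python sketch that reports peak level in dBFS; it assumes the audio is a float array normalized so that full scale is 1.0:

```python
import numpy as np

def peak_dbfs(audio):
    """Peak level relative to full scale; 0 dBFS or above means clipping."""
    peak = np.max(np.abs(audio))
    return 20 * np.log10(peak) if peak > 0 else -np.inf

# e.g. a take peaking at half of full scale sits comfortably below clipping:
print(peak_dbfs(np.array([0.5, -0.25, 0.1])))   # about -6.0 dBFS
```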

7. If your device has a direct monitoring feature, switch it on so that you can monitor your instrument in real time while recording. Otherwise, you can turn on your DAW’s live monitoring feature. Live monitoring in a DAW, however, takes up more computer resources and suffers from some degree of latency; the short sketch below shows where that latency comes from.

IMG_0590
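The latency mentioned in step 7 comes largely from the audio buffer size. Here is a back-of-envelope Python sketch of that relationship (the buffer sizes are illustrative, and real round-trip latency also includes driver and converter overhead):

```python
def buffer_latency_ms(buffer_samples, sample_rate):
    """One-way latency contributed by a single audio buffer."""
    return 1000.0 * buffer_samples / sample_rate

print(buffer_latency_ms(256, 44100))    # ~5.8 ms: usually unnoticeable
print(buffer_latency_ms(1024, 44100))   # ~23.2 ms: enough to throw off your playing
```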

8. Once you’ve set up the appropriate levels, arm the audio track in your DAW, and you can start recording.

Recording in Progress

Below is a link to an example of guitar and bass work recorded using the procedure described in this lesson:

Prior to signing up for this class, I had learned how to record in this manner through trial and error. As a result, I’ve had my share of badly recorded audio, but I accept those mistakes as part of the learning process. All in all, I think the entire process of recording without an amplifier went well. Nowadays, I use a Digitech RP255 multi-effects pedal to emulate a variety of amp sounds and guitar effects. Combined with the direct monitoring feature of my audio interface, it lets me hear my guitar processed with the effects I want in real time. When using direct monitoring while recording, I prefer to switch the audio interface to mono mode so I can hear the guitar’s output on both speakers (since it’s only connected to one input). Previously, I was using VST amp simulators and effects plus live playback monitoring in my DAW. As described in the lesson, the problem with this is latency, plus the fact that VSTs use up computer resources.

Thank you for taking the time to read and evaluate this lesson. I do hope that I have presented the lesson accurately. If you have any feedback or if there’s anything else I could have done to explain things better, I would love to hear from you.