Tuesday, March 11, 2014

Synthesis Modules

Hello! We finally (and sadly) arrive at the last week of this course, so this week we will learn a little about the five most important synthesis modules and their usage: Oscillator, Filter, Amplifier, Envelope, and LFO, according to the lesson for week 6 of Introduction to Music Production at Coursera.org. Let's start!


Parts of a Synth - Lennar Digital Sylenth 1 Plugin over Cubase

The Oscillator (Voltage Controlled Oscillator - VCO)

The Oscillator is the module that creates the sound. This sound is based on different geometric waveforms, so the sound created depends on the selected waveform shape. When a note is played, the signal begins in this module before feeding through the other modules of the synth. The most common waveforms are:
  • Sine Wave – Representing a single frequency with no harmonics.  Sounds very clear.
  • Sawtooth Wave – The sound is often fuller as it contains all harmonics.  It produces a sharp, biting tone.
  • Square Wave – Produces a reedy, hollow sound as it is missing the even harmonics. 
  • Pulse Wave – Produces a similar sound to the square wave but has the unique ability to have its width modulated.
  • Triangle Wave – Produces a sound like a filtered square wave; however, the higher harmonics roll off much faster.
  • Noise – If the vibrations do not follow a discernible pattern, the waveform is random and the sound is called noise.
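Although the course doesn't involve programming, a tiny sketch can make the shapes concrete. This is a minimal, illustrative Python version of the waveforms above (the function names are mine, not part of any synth); `phase` runs from 0 to 1 over one cycle.

```python
import math

def sine(phase):
    # Pure tone: a single frequency, no harmonics.
    return math.sin(2 * math.pi * phase)

def sawtooth(phase):
    # Ramps from -1 to 1 once per cycle; contains all harmonics.
    return 2.0 * (phase % 1.0) - 1.0

def square(phase):
    # Jumps between -1 and 1; the even harmonics are missing.
    return 1.0 if (phase % 1.0) < 0.5 else -1.0

def pulse(phase, width=0.25):
    # Like the square, but the high portion of the cycle (the width)
    # is adjustable, and can itself be modulated.
    return 1.0 if (phase % 1.0) < width else -1.0

def triangle(phase):
    # Odd harmonics only, rolling off faster than the square wave.
    return 4.0 * abs((phase % 1.0) - 0.5) - 1.0

def oscillator(waveform, freq, sample_rate, n_samples):
    # Generate n_samples of the chosen waveform at the given frequency.
    return [waveform(freq * n / sample_rate) for n in range(n_samples)]
```

Feeding any of these functions to `oscillator` produces the raw signal that the rest of the synth (filter, amplifier, envelope) would then shape.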
 
Oscillator - Lennar Digital Sylenth 1

The Filter (Voltage Controlled Filter - VCF)

After passing through the Oscillator, the sound enters the Filter, the module that blocks some frequencies while letting others pass through. The most common filter type is the Low Pass Filter, used to reduce the high end drastically.

The other common filter types include High Pass, Band Pass and Notch. Whichever type of filter is used, however, it can be modulated by adjusting the filter Cutoff (the point on the frequency spectrum at which the filter begins to take effect). This modulation, or sweeping movement of the Cutoff, can create sounds that start bright and end dull, for example by sweeping the Cutoff of a Low Pass Filter across the frequency spectrum from high to low. These adjustments can be made to perfect a particular patch, or modulated over time to create a dramatic effect.
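To see what "blocking high frequencies" means in practice, here is the simplest possible low-pass filter in Python: a one-pole smoother. This is a bare-bones sketch of the idea, not the (much more sophisticated) filter inside Sylenth or any other synth.

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate):
    # Classic one-pole low-pass: each output sample mixes a little of
    # the new input into the previous output. A higher cutoff means a
    # larger mixing coefficient, letting more of the signal through.
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)   # move a fraction of the way to the input
        out.append(y)
    return out
```

Sweeping `cutoff_hz` down over time while the note plays is exactly the "bright to dull" effect described above.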


Filter - Lennar Digital Sylenth 1

The Amplifier (Voltage Controlled Amplifier - VCA)

After the signal is modified by the filter, it passes through to the Amplifier, which is usually a unity-gain amplifier that varies the amplitude of a signal in response to an applied control voltage. The response curve may be linear or exponential.

The Amplifier determines the instantaneous volume level of a played note, and it quiets the output at the end of the note. A VCA may be referred to as being "two quadrant" or "four quadrant" in operation. In a two quadrant VCA, if the control voltage input drops to less than or equal to zero, the VCA produces no output. In a four quadrant VCA, once the control voltage drops below zero, the output gain rises according to the absolute value of the control voltage, but the output is inverted in phase from the input. A four quadrant VCA is used to produce amplitude modulation and ring modulation effects.

The Envelope

The Envelope modulator is attached to the Amplifier to control exactly how it behaves over time by adjusting the ADSR parameters, which is why the two are often discussed together. Although Envelopes can control different parameters, the last one in the synthesiser will usually be the Amplitude Envelope. The ADSR controls are:
  • Attack time - the time taken for initial run-up from nil to peak level, beginning when the key is first pressed
  • Decay time - the time taken for the subsequent run down from the attack level to the designated sustain level
  • Sustain level - the level during the main sequence of the sound’s duration, until the key is released
  • Release time - the time taken for the level to decay from the sustain level to zero after the key is released
These four controls define a path for the sound to follow, and adjustments to each of them will have an impact on the sound we hear. For example, an organ envelope behaves almost like a switch, functioning nearly "on and off", and we can use it to emulate blown or bowed instruments that hold sustaining notes. To emulate a percussive sound, the Sustain level is set to zero, and the length of the note is then controlled by the Decay time. For a punchier sustaining sound, however, like that of a trumpet, the Sustain level can be set somewhere in the middle so the Decay brings the level down to it.
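The ADSR path can be written down directly. This is a simplified sketch (linear segments, and it assumes the key is held at least through the attack and decay phases); real envelopes are often exponential, but the four stages are the same.

```python
def adsr_level(t, note_length, attack, decay, sustain, release):
    # Amplitude (0..1) at time t for a note held for note_length seconds.
    # attack, decay and release are times in seconds; sustain is a level.
    # Simplification: assumes note_length >= attack + decay.
    if t < 0:
        return 0.0
    if t < attack:                        # run-up from nil to peak
        return t / attack
    if t < attack + decay:                # fall from peak to sustain level
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < note_length:                   # hold while the key is down
        return sustain
    if t < note_length + release:         # fade to zero after key release
        return sustain * (1.0 - (t - note_length) / release)
    return 0.0
```

Setting `sustain=0.0` gives the percussive case from the text: the note dies away during the decay stage no matter how long the key is held.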

Envelopes can be used really well on an existing patch to modify and perfect it until it sounds just right.


Amplifier - Envelope - Lennar Digital Sylenth 1

Low Frequency Oscillator (LFO)

The last of the five main modules is designed to control any other parameter within the synthesiser. Unlike the envelope, which starts and finishes, the LFO is cyclical (like a rhythmic pulse), and because it repeats over time it can be used to control the Voltage Controlled Oscillator (VCO), creating changes in pitch to achieve Vibrato. It can also control the Amplitude, creating Tremolo, and it can control the Cutoff frequency of the Filter, creating a ripple effect.

We cannot hear the LFO itself, so it will always require a Destination, because "Low" actually means lower than the range of human hearing (below 20 Hz). Once this Source-to-Destination configuration has been set up within the synth, the amount or extent of modulation can be applied, and it can be applied in various ways or "waves". Because the LFO is an Oscillator, it too can be set to different waveforms. When controlling the VCO, for example, it can create different-sounding vibratos depending on the wave shape, and increasing the LFO amount increases the frequency variations in the VCO's response.
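The vibrato case (LFO routed to the VCO's pitch) reduces to one line of math. A hypothetical sketch, assuming a sine-shaped LFO:

```python
import math

def vibrato_frequency(t, base_freq, lfo_rate, lfo_amount):
    # Instantaneous pitch of a VCO whose frequency is modulated by a
    # sine LFO. lfo_rate is in Hz (below 20 Hz for a classic vibrato);
    # lfo_amount is the depth of the pitch swing, also in Hz.
    return base_freq + lfo_amount * math.sin(2 * math.pi * lfo_rate * t)
```

With `base_freq=440`, `lfo_rate=5` and `lfo_amount=4`, the pitch wobbles between 436 Hz and 444 Hz five times per second; turning up `lfo_amount` widens the wobble, exactly as described above.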


LFO - Lennar Digital Sylenth 1

Reflections

It's a little difficult to describe the tones and textures of a sound or note produced by a synth, but once we understand how these modules work and interact, we can use synthesis as a kind of language to accurately express and emulate almost any type of sound. I hope you find this simplified explanation of the synthesis modules as useful as it was for me. Thanks very much for reading again, and don't forget to comment.

Wednesday, March 5, 2014

Modulated Short Delay Effects

Hi everyone! This week we'll see some demonstrations of the usage, function and configuration of two important modulated short delay effects: Flanger and Chorus. This topic corresponds to the lesson from week 5 of Introduction to Music Production at Coursera.org.

Delay effects


Delay is one of the most common effects used in audio production today. As we've seen before, delay is related to propagation. A delay processor works by sending the input signal to the output at a later time (Delay time); the delayed signal is then combined with the original (Mix control), and finally, the signal is repeated a set number of times (Feedback control). Thanks to this, delay effects give us a sense of space and an illusion of dimension by using "repetitions" of the original signal.
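Those three controls (Delay time, Mix, Feedback) map directly onto a few lines of code. This is a minimal illustrative sketch of a feedback delay line, not the algorithm of any particular plugin:

```python
def delay_effect(dry, delay_samples, feedback, mix):
    # Basic delay: a circular buffer holds the signal from delay_samples
    # ago. feedback (0..1) feeds the wet signal back into the buffer so
    # each echo repeats more quietly; mix (0..1) blends dry and wet.
    buffer = [0.0] * delay_samples
    out = []
    for i, x in enumerate(dry):
        delayed = buffer[i % delay_samples]      # echo from the past
        buffer[i % delay_samples] = x + feedback * delayed
        out.append((1.0 - mix) * x + mix * delayed)
    return out
```

Run an impulse through it and you get a train of echoes, each one `feedback` times quieter than the last; that decaying repetition is what creates the sense of space.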

Among the delay effects we can mention flangers, phasers, choruses, delays and reverbs; specifically, the effects that make up the modulated short delays are choruses, phasers and flangers. For the purposes of this article, we'll work just with chorus and flanger.

Flanger and Chorus - Similar, but not the same



Flanging and chorusing are pretty standard pieces in every engineer's effects arsenal today, but while they may seem to offer a similar effect, they're certainly not the same. The simplest difference between them is that the flanger uses a shorter delay than the chorus, as we can see in the next figure:


Flangers, phasers and choruses all work by producing a series of frequency notches that are slowly swept across the frequency bandwidth (that's the modulation). We don't really hear the notches; we hear what's left in the frequency spectrum, which is a series of peaks. Flangers and choruses produce a larger number of notches than phasers, and those notches are spaced harmonically.
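Structurally, a flanger is just the delay idea from before with a very short delay time that an LFO sweeps up and down. A hedged sketch (parameter names and default values are mine, not Cubase's):

```python
import math

def flanger(dry, sample_rate, rate_hz=0.5, min_delay_ms=1.0,
            max_delay_ms=5.0, mix=0.5):
    # Mix the signal with a very short delayed copy whose delay time is
    # swept by a sine LFO between min_delay_ms and max_delay_ms. The
    # comb-filter notches sweep along with the delay time.
    out = []
    for i, x in enumerate(dry):
        t = i / sample_rate
        sweep = 0.5 * (1.0 + math.sin(2 * math.pi * rate_hz * t))
        delay_ms = min_delay_ms + sweep * (max_delay_ms - min_delay_ms)
        d = int(delay_ms * sample_rate / 1000.0)
        delayed = dry[i - d] if i >= d else 0.0
        out.append((1.0 - mix) * x + mix * delayed)
    return out
```

A chorus has the same shape but with longer delays (tens of milliseconds) and, usually, a slight detune on the delayed copy, which is why the two effects sound related but different.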

Demonstration

The best way to demonstrate these effects is by listening and comparing the sounds as the effects are configured. I'll show you some configurations I used on a song by my band. Let's first hear the excerpt of the track without delay effects.


In the track there are two guitars recorded: an overdriven lead guitar and a clean rhythm guitar.

For this lesson I used the effects integrated in my DAW (Cubase 5). Let's start!

To add an effect, we go to the Inserts section of the desired track, click on any insert slot (there are 8 available per track), and select the desired effect. For Flanger and Chorus we have to click first on the Modulation menu and then select the effect.


Now, for the lead guitar, let's select a Flanger. The Cubase Flanger is the classic effect, with some stereo enhancements. Here you can see the controls and default values.



This effect has many settings to adjust:

  • Rate: Specifies the value of the note to synchronize the sweep.
  • Range Lo/Hi: Establishes the limit frequencies for the sweep.
  • Feedback: Determines the flanger type. 
  • Spatial: Establishes the stereo amplitude.
  • Mix: Adjusts the balance between the processed and unprocessed signal.
  • Shape: Changes the waveshape.
  • Delay: Adjusts the initial time delay.
  • Manual: Determines the modulation adjusting the sweep.
  • Filter Lo/Hi: Determines the allowed frequencies to pass.
After moving the knobs and listening for a while, I found a sound I liked with this configuration:



I chose this sound because it is not so "metallic" and has a really soft sweep, which gives the guitar a "deep and spatial" sound.

Now, let's work with the rhythm guitar. On this track, we insert a Chorus, following the same steps as before with the Flanger. The Cubase Chorus is a single-stage effect, which works by doubling the input signal with a slightly detuned copy.




The Chorus controls are almost the same as the Flanger controls. The Range Lo/Hi controls are replaced by Width, and the Feedback and Manual knobs disappear. The Width determines the depth of the chorus.



Similarly, after moving the knobs for a while, I found a sound I liked with this configuration:




This effect was chosen because it gives the rhythm guitar the depth it needs to fill the space and provide solid backing throughout the track.


Reflections

Once again, thanks so much for reading and following my articles. This is a fun and important topic, because these effects are so commonly used in recordings. Personally, I have these two effects as pedals for my guitar, and researching the correct usage of the controls gives me the chance to get more great sounds and a better performance with my band. I really hope you find this information as useful as I did.

As always, I invite you to take the audio files and explore new configurations besides the ones I showed you here. See you!

Sunday, February 23, 2014

The Effective Use of Compression

Hello again, everybody. According to the lesson for week 4 of Introduction to Music Production at Coursera.org, this week's topic is how to use a compressor effectively in a musical context.

Basic concepts

Compression is one of the most important and widely used effects in recording. The songs we hear on the radio can go through 4 or even 5 different compressors. Given this importance, it is useful to understand how compression works and how it can be applied efficiently.

Compression works by reducing the volume of the louder sections of a track and increasing the volume of the quieter sections, giving tracks more consistency. This effect can also be used to bring out important details: it helps to control a vocalist with bad mic technique, limit distortion from loud sounds, or increase the average level of a track, making it sound louder.

Compression Controls

Commonly, compressors offer some basic controls: threshold, ratio, attack and release. These controls let you adjust the compression settings to get the desired results.

The threshold control, measured in dB, determines at what level the compressor kicks into action; in other words, how loud the signal has to be before its volume is reduced. Setting the Threshold high means less compression, and a low Threshold means the compressor will do more.

When the input signal passes the threshold level, the gain is lowered according to the ratio control. For example, with the ratio set at 2:1, an increase in input of 2 dB over the threshold will result in an increase in the output of 1 dB. Higher ratios mean more compression, applied only to the portion of the signal that exceeds the threshold.

The Attack and Release controls work together with the threshold control to determine when compression should begin and end. Specifically, the Attack control sets how long the compressor waits to start processing the input signal once it crosses the Threshold, and the Release control determines how long the compressor keeps processing the signal once it has fallen below the Threshold level. It is important to keep in mind that if the input signal stays above the Threshold for less time than the Attack setting, compression may not occur. A long attack allows transients to pass through before compression starts; a short attack will cause the signal to be compressed almost instantly. On the other hand, a long release gives a natural sound, while a short release makes the compression stop abruptly, which can produce unnatural, pumping or distorted sounds.
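The threshold/ratio arithmetic is easy to check with a few lines of code. This is a sketch of the static compressor curve only (attack and release smoothing are deliberately left out), with the make-up gain included as an extra parameter; the function name and signature are mine, not Cubase's:

```python
def compress_db(level_db, threshold_db, ratio, makeup_db=0.0):
    # Static compressor curve: below the threshold the level is
    # untouched; above it, every `ratio` dB of input over the threshold
    # becomes 1 dB of output over the threshold. makeup_db is added at
    # the end to compensate for the gain reduction.
    if level_db <= threshold_db:
        out = level_db
    else:
        out = threshold_db + (level_db - threshold_db) / ratio
    return out + makeup_db
```

Plugging in the 2:1 example from the text: an input 2 dB over the threshold comes out 1 dB over it, and anything below the threshold passes unchanged.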

How to adjust the compression. An example in Cubase

An important thing to keep in mind is that every unit and every track is different. I recommend starting by setting a high threshold, a low ratio, and both the attack and release in a middle position. Then, move the threshold control down until you hear the compressor reducing the signal. From there, adjust the controls bit by bit until you get what you want, which may be a particular sound or a particular reduction in dynamic range. Here's an example using the Cubase Compressor on a clean electric guitar extracted from a track of my own music.



Following the instructions above, move the Threshold down until it reaches the level at which the compressor starts processing the signal. After that, set the Ratio, moving it bit by bit until you get the desired sound. For this example, I moved the Threshold down to -30 and the Ratio to 6.



The Attack and Release times should generally correspond to the speed of the instrument. For example, bass tracks should normally use slow attack and release times, while drum tracks usually sound best with fast times. Since we are using a clean electric guitar, I recommend setting long Attack and Release times. For this example I used 80 for the Attack and left the Release at 500, the middle position established before.


Once the threshold, ratio, attack and release are set, we should look at the meters and increase the gain to compensate for the gain reduction imposed by the compressor. This can be done with the Make-Up control. Typically, the make-up gain is set equal to the gain reduction shown on the meters.


Reflections

When I was learning the basics of compression with a compressor pedal for my guitar, I over-compressed everything, which didn't help the sound at all. Compression is a great tool for mixing when we know how to use it well. Finding the right balance is the key, and the only way to do that is to practice and listen, with our own mixes or with the mixes of our favorite music: some forms of music, especially acoustic music, sound best without any compression at all; electronic music, punk, and hip-hop often use huge amounts of compression; folk, country and other similar genres typically benefit from moderate compression.

Thanks once again for reading this week. I invite you to take the uncompressed audio in the first link, reproduce these examples, and experiment with some other values too. You can also listen to my band Tephros in the SoundCloud player on the right side of this page. We are re-recording the published tracks with the knowledge acquired in this course to get a better sound, and we are getting there! Greetings and enjoy!

Wednesday, February 19, 2014

Channel strip

Hi everyone! This week's topic is the Channel Strip, according to the lesson for week 3 of Introduction to Music Production at Coursera.org. The main purpose of this lesson is to explain how a signal flows through a channel strip, both in a DAW and on an analog mixing board, detailing every component, its usage, and the position of the knobs.

What is a Channel Strip?


A Channel Strip is a device that amplifies an input audio signal and controls the levels of the sound effects applied to it. A Channel Strip also allows the user to monitor the signal for critical listening and to adjust equalization.

This device may be a standalone unit or one of many units built into a mixing board (each of the columns we see on the board).


Parts of the Channel Strip: Analog Mixing Board and DAW

As we said before, a mixing board is made up of many channel strips, and we can find them in analog (mixer) and digital (DAW) form. A Channel Strip is normally composed of:

Input Section: This section contains XLR and Line inputs, which allow an input sound signal to be fed into the Channel Strip.
Trim knob: This knob is basically a preamp; it controls the input level of the signal.
Insert Section: The Insert section lets you patch an external processor into the signal path: the signal is sent out of the strip, processed, and returned at the same point.
Aux sends: This knob controls the level sent to a separate output from the current Channel Strip, usually feeding headphones or another mixer.
EQ Section: This section is composed of equalization knobs that let you shape the input signal. We usually see three knobs (High, Mid and Low equalization).
Pan knob: The Pan knob controls the relative level of the stereo (left and right) channels. This means the input signal can be "moved" toward the left or right speaker according to the position of the knob, lowering the level of the opposite side.
Mute button: This button silences or mutes the current Channel Strip.
Solo button: This button isolates the signal of the current Channel Strip, muting all others.
Fader: Controls the level of the output signal, which is sent to the master bus of the mixer and controlled again by a master fader.
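The top-to-bottom flow of the strip can be sketched as a chain of gain stages. This is a toy model, with the EQ collapsed to a single broadband gain and a naive pan law; every name here is mine, for illustration only:

```python
def channel_strip(samples, trim=1.0, eq_gain=1.0, pan=0.0,
                  fader=1.0, mute=False):
    # Toy model of a strip's signal flow: trim (preamp) -> EQ (reduced
    # here to one broadband gain) -> pan -> mute -> fader. pan runs from
    # -1.0 (hard left) to +1.0 (hard right); returns (left, right)
    # sample pairs bound for the master bus.
    if mute:
        return [(0.0, 0.0)] * len(samples)
    left = min(1.0, 1.0 - pan)     # panning right lowers the left side
    right = min(1.0, 1.0 + pan)    # and vice versa
    out = []
    for x in samples:
        x = x * trim * eq_gain     # input gain, then tone shaping
        out.append((x * left * fader, x * right * fader))
    return out
```

Doubling the trim while halving the fader leaves the output level unchanged, which is a nice way to see why gain staging happens at several points along the strip.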

Another important thing to bear in mind is that a Channel Strip usually works with the signal flowing from top to bottom, but it is not always this way. My personal mixer, a Peavey XR 680E Powered Mixer, proves this (sorry about the dust):


As you can see, this is a basic mixer. The input section (Line and Mic) is at the bottom of the strips. This mixer doesn't have a Trim knob, Insert section, Aux Sends, Mute button or Solo button, but it has an Effects Level section, which adds some reverb to the channel, perfect for use on vocals.

On the other hand, the DAW I use is Cubase 5. To open the Mixer, we can select it from the Devices menu or just press F3. The Cubase 5 mixer looks like this:


The Channel Strip of the Cubase mixer is visibly more complete than my analog mixer, having all the sections already mentioned and allowing wider control over the signals in the DAW project.

Reflections

The Channel Strip is a really important tool, and not just in recording; when we use a mixer in live performances or rehearsals, we should know at least the basic controls of these devices to ensure a better mix and a great sound for the listeners. Thanks very much for reading again. This material wasn't new to me, but through reading and researching this topic I clarified some terms and concepts. I hope you enjoyed this info and that it may be as useful to you as it was for me. Comment!

Sunday, February 9, 2014

The Analog to Digital conversion process

Hello everybody. This week we will be learning about the analog to digital conversion, according to lesson for week 2 of Introduction to Music Production at Coursera.org. 


So first of all, what is analog and digital?

As we said last week, sound is produced by pressure variations in the air. When a microphone converts the sound into voltage (sent through a cable to an amplifier or stereo), the electrical signal varies similarly to the variation in air pressure generated by the sound wave. It is continuous, like the pressure variations. That's why it is called analogue (similar).

On the other hand, digital systems need to convert the input audio signal into digital data that can be processed by numerical calculations. This process results in digital information based on bits (the basic unit of information in computing and digital communications). The term bit is a contraction of BInary digIT.


A single bit can have only one of two values, and may therefore be physically implemented with a two-state device. With a single bit, the two values can be interpreted as logical values (true/false, yes/no), algebraic signs (+/−), states (on/off), or any other two-valued attribute. The most common representation of these values is 0 and 1. But if we want to represent larger numbers, we have to string bits together into "words", so a specific number of bits (the word length) allows the representation of a specific number of values.
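The "word length determines how many values fit" idea is just powers of two, which a couple of throwaway Python helpers (my own names, purely for illustration) make explicit:

```python
def representable_values(word_length):
    # An n-bit word can represent 2**n distinct values:
    # 1 bit -> 2, 8 bits -> 256, 16 bits (CD audio) -> 65,536.
    return 2 ** word_length

def to_bits(value, word_length):
    # Write a non-negative integer as a fixed-length string of 0s and 1s.
    if not 0 <= value < 2 ** word_length:
        raise ValueError("value does not fit in a word of that length")
    return format(value, "0{}b".format(word_length))
```

For example, the number 5 stored in an 8-bit word is the bit string 00000101.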



How does it work? How do we convert one into the other? The analog to digital process



The device responsible for changing an analog signal into a series of numbers is the analog-to-digital converter (or A/D converter). It works by repeatedly measuring the amplitude of an incoming electrical voltage and outputting these measurements as a long list of samples, assigning a binary value to each sample's amplitude. In this way, a mathematical "picture" of the shape of the wave is created. Every audio interface has an analog-to-digital converter.

The converter's sample rate dictates how often it measures the signal to generate a new value, splitting the signal into samples separated by identical time intervals; the number of samples measured per second is the Sample Rate. The more frequently the converter measures the signal, the more accurate the resulting data. To capture the full audio spectrum up to around 20,000 cycles per second (20 kHz), a sample rate of 44.1 kHz is common. Higher sample rates make for increased treble response and a more "hi-fi" sound; low sample rates sound duller and darker.

Bit depth sets how many bits the converter uses for each numerical measurement of the signal. More bits mean a more accurate measurement, which explains why 16-bit CD audio sounds so much better than an 8-bit multimedia sound file. A low bit depth is like forcing the converter to measure the sound with a yardstick marked only in inches; a higher bit depth gives the converter much greater accuracy (a yardstick marked in 1/8-inch increments, for example).
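We can simulate the whole measure-and-round process in a few lines. This sketch (with names of my own invention) samples a sine wave and quantizes each measurement to a bit_depth-bit "yardstick", so you can watch the measurement error shrink as the bit depth grows:

```python
import math

def sample_and_quantize(freq, sample_rate, bit_depth, n_samples):
    # Measure a sine wave n_samples times (the sample rate sets the
    # spacing between measurements) and round each measurement to the
    # nearest step on a bit_depth-bit scale.
    levels = 2 ** (bit_depth - 1)            # steps on each side of zero
    out = []
    for n in range(n_samples):
        x = math.sin(2 * math.pi * freq * n / sample_rate)
        out.append(round(x * (levels - 1)))  # integer code for the sample
    return out

def decode(codes, bit_depth):
    # Map the integer codes back to the -1..1 range for playback.
    levels = 2 ** (bit_depth - 1)
    return [c / (levels - 1) for c in codes]
```

Quantizing at 4 bits gives only 8 steps per polarity (a coarse yardstick); at 16 bits there are 32,768, and the round-trip error becomes inaudibly small.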


It may help to picture the interaction of sample rate and bit depth as a grid used to measure an audio signal. A higher sample rate corresponds to better accuracy on the horizontal axis; greater bit depth corresponds to vertical resolution. 

With this process, the audio signal changes a fundamental characteristic: it stops being continuous and becomes a discrete signal, separated into steps. This is important because, as we gain something with digital audio, we also lose something else. While digital audio offers high precision, simplified editing and processing, easy storage and consistent quality over time, analog sound is known to provide more depth and warmth, though it is prone to higher noise levels and deteriorates with time.




Reflections

This topic is quite complex and interesting; it's a really fascinating process that represents the basis of all current digital recording technology. Without it, the digitization of sound signals would not be possible. I hope you find this simplified explanation of this important process useful. Thanks very much for reading again, and don't forget to comment.

Thursday, February 6, 2014

Visualizing Sounds

Introduction

My name is José M. Pérez. I am from Barquisimeto, Venezuela. This week we will be learning about sound visualization, according to the lesson for week 1 of Introduction to Music Production at Coursera.org.


A little theory first...

What is sound? Sound is a vibration that propagates as a mechanical wave of pressure and displacement through some medium (such as air or water), although sometimes "sound" refers only to those vibrations with frequencies within the range of hearing.

To visualize sound, we have to know some related terms, such as:

Amplitude

Amplitude relates to the extent of the wave: how widely it moves, or how much the air compresses and rarefies as the waveform moves or propagates through the air. Visually, it appears as the height of a waveform above or below its zero baseline. Amplitude is also referred to as signal volume, and looks like this:


Frequency

Frequency refers to the rate at which an audio source generates complete cycles in one second. This property determines the pitch of the sound. It is only really meaningful for musical sounds, where there is a strongly regular waveform.


Pitch

Pitch is closely related to frequency, but the two are not equivalent. Pitch is an auditory sensation in which a listener assigns musical tones to relative positions on a musical scale based primarily on the frequency of vibration. In other words, pitch is the frequency of a note determining how high or low it sounds.

Tools for visualizing sounds

Oscilloscope

An oscilloscope is a type of electronic test instrument that allows observation of constantly varying signal voltages, usually as a two-dimensional plot of one or more signals as a function of time. Non-electrical signals such as sound or vibration can be converted to voltages and displayed.

The oscilloscope shows the amplitude of the wave on the y (vertical) axis and time on the x (horizontal) axis. In the next example, the y axis is simply a linear scale with 0 as no signal and 1 or -1 as a fully saturated signal. You can see that the amplitude of the wave is decreasing over time, and that the frequency is growing higher because the distance between wave peaks is getting shorter.
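A signal like the one in that example (shrinking amplitude, rising frequency) is easy to generate, which is handy if you want to make your own oscilloscope screenshots. A hypothetical sketch, with parameter names of my own choosing:

```python
import math

def decaying_chirp(n_samples, sample_rate, f_start, f_end, decay):
    # A test signal like the oscilloscope example: the amplitude shrinks
    # over time (exponential decay) while the frequency rises linearly
    # (a chirp), so successive wave peaks get closer together.
    out = []
    phase = 0.0
    for n in range(n_samples):
        t = n / sample_rate
        freq = f_start + (f_end - f_start) * n / n_samples
        phase += 2 * math.pi * freq / sample_rate  # accumulate phase
        out.append(math.exp(-decay * t) * math.sin(phase))
    return out
```

Plotting the returned list against time reproduces the two things the text points out: the envelope falling and the peaks bunching up.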


Spectrum Analyzer


A spectrum analyzer measures the amplitude of an input signal (y axis) versus frequency within the full frequency range of the instrument (x axis). Its primary use is to measure the power of the spectrum of known and unknown signals, providing a snapshot of the energy distribution by frequency. Since it is a snapshot, it doesn't contain information on how those energy distributions change over time. Common applications include measuring distortion and the harmonics within a signal.



Sonogram

A sonogram is an instrument that separates an incoming wave into a visual representation of its spectrum of frequencies as they vary with time. The sonogram represents time on the x axis, frequency on the y axis, and amplitude mapped to color. It gives the clearest picture of how our ears actually hear over time. In the next sonogram we can see that the amplitude of the wave is decreasing over time while the frequency is increasing.


Reflections

Thanks very much for reading. The material wasn't new to me, but I needed to write it out to reaffirm and clarify the terms, and while making this post I learned some things I was missing. I hope this info may be as useful to you as it was for me, and I would love to know if I got everything right. If there's anything I missed or could have explained better, please comment and I would be happy to modify or correct what's necessary.