
Intermezzo (case study)

Cavalleria Rusticana (~1880). The stairway, the man lying (resting?) on it, and the panicking ladies are reminiscent of the last minutes of The Godfather Part III.

The song

Warning: If you haven’t seen them yet, the links leading to YouTube videos may contain spoilers.

I wouldn’t have been familiar with this song if not for the movies and TV shows that used it. It is probably best remembered for the tragic ending of The Godfather Part III. I actually saw that scene on a black-and-white console television when I was very young, but my dad, having already watched the movie, changed the channel, so I did not remember the song from it. Later, I saw a friend playing a black-and-white video of a boxer warming up in slow motion. As I eventually found out, that was the opening credits of Raging Bull. It definitely has cult appeal, and I partially remembered the tune, but I wasn’t interested enough to dig further. It was not until the 31st episode of Rurouni Kenshin that I really got interested in this song. Thirty-one episodes in, the characters and story had grown on me, making Intermezzo one of my favorite classical pieces. The unfolding story, artistic animation, and the music made a very powerful emotional combo (one that could probably make a normal person cry).

The score

Since this is an old classic, computer-playable sheet music is very likely available out there. PDFs are available at IMSLP and MIDIs from various sources, but the best I was able to obtain was from musescore.com, created by MaestroMoi. Since the majority of the transcription had already been done, I had the luxury of nit-picking further, almost obsessively, trying to make the score look identical to the one available at IMSLP, and incorporating the additional woodwind parts from a more recent transcription, also from IMSLP. I could have just used one of the google-able MIDI files, but that would have spoiled the fun and the learning process.

From the score, I learned that the 2nd violins and A clarinets play divisi (divided into smaller groups playing different parts, which looks like double stops when notated). Hence, I should balance the volume to avoid making it sound as if there were twice as many second violins or clarinets; a toy sketch of the idea follows. Articulation markings in the score would also help me decide how to modify the MIDI.
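To get a feel for the adjustment, here is a toy sketch in Python. The 0.7 factor is my own assumed starting point, not something from the score or MuseScore; in practice you tune it by ear:

    # Toy illustration: scale down the velocities of a divisi staff so its
    # two written voices don't sound like a doubled section. The 0.7 factor
    # is an assumed starting guess, to be tuned by ear.
    def balance_divisi(velocities, scale=0.7):
        return [max(1, round(v * scale)) for v in velocities]

    print(balance_divisi([64, 72, 80]))  # -> [45, 50, 56]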

Anyway, here are the files:

  • MuseScore score. Size A3. I find the A3 size compact enough but still legible when scaled down to an A4 printout.
  • MuseScore exported PDF. Note: some symbols are not optimally arranged.
  • MuseScore exported MIDI.
  • MIDI tweaked and articulated with Sekaiju. This is 4.3 times larger than the “un-articulated” version.

The complexity of the score also helped me learn more of MuseScore’s features, like putting notes on the adjacent staff instead of using more ledger lines, and putting beams over notes in different bars (which starts to look odd if the bar is on a different line or page, revealing room for improvement in MuseScore). I also found it helpful to modify the score layout to make the page as long as possible, so I don’t have to navigate to different lines or pages when editing an instrument part.

Divide and conquer

At some point in the working process, it is easier to use sheet music for the individual instruments instead of the full score. Fortunately, “parts” were also available from IMSLP. Since I focus on one instrument at a time when tweaking MIDI, it is convenient to have the full instrument part on one page. Extracted parts also speed up work by avoiding confusion or distraction from the other instruments. It is also helpful to put the bar number on each bar, not just the first bar of each line (you may want a printout). Since parts are not always available, re-writing the score in MuseScore also gives you this advantage.

Software

The usual suspects plus AAMS:

  • MuseScore: converting the visual score to something that can be made into MIDI
  • Sekaiju: further MIDI tweaking (expression and articulation)
  • SynthFont: rendering the MIDI with soundfonts
  • Freeverbtoo: reverb
  • AAMS: EQ mastering based on a reference recording

Samples

The samples used are mostly from the Sonatina Symphonic Orchestra (SSO), with a few exceptions:

  • Organ: Jeux d’orgues
  • “Low harp”: aCoUsTicBaSs from the Jazz Page, since the lowest harp notes in the piece are not audible with SSO.

I’m not very particular about these other instruments since the strings dominate the sound. The strings and woodwinds from SSO worked fine. I’m not quite happy with SSO’s harp, but I just let it be since it is not as loud as the rest. For reverb, I used a cathedral preset in Freeverbtoo. I actually wondered whether I should go for an IR convolution reverb, but I probably made a “mistake” by starting out with the snappy and convenient go-to Freeverbtoo: it became difficult to make the piece sound the way I like with other reverb VSTs.

MIDI tweaking

Since I’m very familiar with how the song goes, this proved to require more effort than my usual re-arrangements. In short, I had higher standards because I had an easy and definite way of benchmarking, i.e. listening alongside an actual recording or my mind’s “earworm”. The score helped me decide how to tweak the MIDI in the following ways (a rough sketch of these tweaks follows the list):

  • Slightly overlapping slurred/legato notes
  • Modifying the MIDI velocity depending on the note’s dynamics (e.g. p, pp, ppp). MuseScore actually takes dynamics into account when exporting to MIDI, but you would still want to make adjustments to get it sounding right.
  • Separating notes that stick end-to-end. If the same pitch is played in succession without a gap in between, weird buzzing sounds sometimes result. It is also likely that real musicians would make a short pause in such cases: can anyone bow the same note twice without pausing in between? Probably not, and the natural reverb and decay of the sound will fill in this gap, just like when a pianist steps on the sustain pedal.
  • Modifying Expression (MIDI CC11) to control loudness, e.g. in notated crescendos and decrescendos, and wherever I feel like changing the volume. CC7 (volume) works similarly, but I did not tweak it. I also used CC11 to tame the piercing or ringing sound of the high notes at the end (from the oboe and 1st violins). This also explains how the MIDI file got more than 4 times larger (still small at 47 KB), since many data points are used to draw the expression curves.
  • Randomizing harp note starts by a slight amount to make them sound “human played.”
  • Adding the low-harp work-around. As mentioned earlier, I couldn’t hear the first low notes of the harp, so I added a supposedly pizzicato contrabass for that part, which was then rendered with an acoustic bass guitar.
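Here is the rough sketch promised above, in Python. It works on a plain list of note dicts rather than a real MIDI file, and the field names (“start”, “end”, “pitch”, “velocity”, “slurred”, “dynamic”) and timing values are my own assumptions, not Sekaiju’s or MuseScore’s:

    import random

    # Assumed mapping from notated dynamics to MIDI velocity; nudge by ear.
    DYNAMIC_VELOCITY = {"ppp": 24, "pp": 36, "p": 48, "mp": 64, "mf": 80, "f": 96}

    def tweak(notes, overlap=0.02, gap=0.04, jitter=0.0):
        """notes: one instrument's notes (start/end in seconds), sorted by start."""
        notes = [dict(n) for n in notes]  # work on a copy
        for prev, cur in zip(notes, notes[1:]):
            if cur.get("slurred"):
                prev["end"] = cur["start"] + overlap   # legato: slight overlap
            elif prev["pitch"] == cur["pitch"] and prev["end"] >= cur["start"]:
                prev["end"] = cur["start"] - gap       # repeated pitch: short pause
        for n in notes:
            n["velocity"] = DYNAMIC_VELOCITY.get(n.get("dynamic"), n["velocity"])
            if jitter:  # randomize note starts slightly, e.g. for the harp
                shift = random.uniform(-jitter, jitter)
                n["start"] += shift
                n["end"] += shift
        return notes

    # Two slurred notes followed by a repeated pitch:
    part = [
        {"start": 0.0, "end": 0.5, "pitch": 67, "velocity": 64, "dynamic": "pp"},
        {"start": 0.5, "end": 1.0, "pitch": 69, "velocity": 64, "slurred": True},
        {"start": 1.0, "end": 1.5, "pitch": 69, "velocity": 64},
    ]
    print(tweak(part, jitter=0.005))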

I also used CC11 to apply a longer fade-out on long notes. Without this tweak, a long note only starts fading out close to its end, during the sample’s release time, which is about 0.5 seconds for SSO. It does not always sound natural when a long note stays near full volume most of the time and then fades out only briefly at the end. Musicians may instead play a long, gradually fading note, especially since, unlike MIDI, they know beforehand how long the note should be. This is not always the case though, so you should trust your ears in the end. I think it applies more when there is an anticipation of silence after the long note, or when the long note ends a phrase.
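As an illustration, such a fade-out can be generated as a dense series of CC11 data points. This little helper is my own sketch (the value range and fade fraction are arbitrary starting points), not something built into Sekaiju:

    def fade_out_cc11(note_start, note_end, fade_fraction=0.4, points=24,
                      from_value=100, to_value=20):
        """Return (time, CC11 value) pairs fading the last part of a long note.
        The many data points per curve are also why the articulated MIDI
        file grew several times larger."""
        fade_start = note_end - (note_end - note_start) * fade_fraction
        return [(fade_start + i * (note_end - fade_start) / (points - 1),
                 round(from_value + (to_value - from_value) * i / (points - 1)))
                for i in range(points)]

    # A 4-second note fading over its final 40%:
    for t, value in fade_out_cc11(0.0, 4.0):
        print(f"t={t:.2f}s  CC11={value}")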

Results

We’ve had enough talking; let’s now hear the music.

One difference I notice from real recordings is that they have more pronounced solo instruments, i.e. my ears can distinguish a solo violin on top of the other violins. The ensemble sound provided by SSO is somewhat more “homogeneous”: there is no dominating solo instrument within the ensemble. This may have to do with the mic placement in real recordings.

Automatic mastering using reference recordings

I also tried out this cool new tool, the Auto Audio Mastering System (AAMS). AAMS adaptively applies EQ settings to an audio file based on built-in references or even on other recordings you have. Unlike the usual EQ with loads of presets, AAMS first analyzes the original audio file and then decides how much of each frequency band to boost or attenuate. And it uses a lot of frequency bands, instead of just a typical smoothed-out curve with 3 to 10 peak regions. What I did was choose a reference recording, have AAMS analyze its spectrum, and make my MIDI orchestration output have the same spectrum. To be on the safe side, I used a reference file with an Open Audio License from Wikimedia:

http://en.wikipedia.org/wiki/File:Pietro_Mascagni_-_Cavalleria_Rusticana_-_Intermezzo_Sinfonico.ogg

It doesn’t have to be the same song, but it’s reasonable to assume that reference-based automatic mastering gives the best results when the reference is playing the same notes as your project. In most cases, it may be enough that the reference has the same style, genre, or instruments, i.e. that it sounds similar to what you want. What I’ve done is closer to exactly what I want.
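To illustrate the idea (this is not AAMS’s actual algorithm, which I don’t know, and the file names are placeholders), reference-based EQ matching boils down to comparing average spectra and deriving per-band gains:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import welch

    def average_spectrum(path, nperseg=2048):
        """Average power spectrum of a WAV file, mixed down to mono."""
        rate, data = wavfile.read(path)
        if data.ndim > 1:
            data = data.mean(axis=1)
        freqs, power = welch(data, fs=rate, nperseg=nperseg)
        return freqs, power

    # Assumes both files share the same sample rate so the bands line up.
    f, p_ref = average_spectrum("reference.wav")   # e.g. the Wikipedia recording
    _, p_mix = average_spectrum("my_render.wav")   # e.g. the SynthFont output

    # Per-band gain (in dB) that would nudge the project toward the reference.
    eps = 1e-12
    gain_db = 10 * np.log10((p_ref + eps) / (p_mix + eps))
    for freq, g in list(zip(f, gain_db))[::128]:
        print(f"{freq:8.1f} Hz  {g:+5.1f} dB")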


AAMS Intermezzo Spectra. The red and blue plot shows the spectrum of my project, while the yellow and green plot shows the spectrum of the reference obtained from Wikipedia. Overall, they look similar, even rising and falling at the same frequency regions.


AAMS Intermezzo Suggestions. How my project audio file should be equalized. Notice how many frequency bands there are. I wish I understood what this plot means beyond knowing that it looks cool.

Interestingly, the spectra didn’t differ much, meaning that the SSO + Freeverbtoo combo already sounds reasonably well mastered to begin with. That may also be a virtue of classical orchestral music being more reproducible: the recording setup is kept as simple as possible, with no special effects whatsoever. But I can still imagine many cases where AAMS could be a lifesaver. Especially if your speakers/headphones are not the best you could get, you could still be safe if your song sounds like a professionally mastered reference, at least by quantitative spectral measures. It may also help if you feel that your sample libraries are not sounding the way you imagine they should.

Conclusions

Although digital MIDI orchestration motivates us to create our own orchestral compositions, it is a good learning experience to work on an existing tune that you know well. This raises your standards as you try to replicate the known song as closely as possible. It is like the difference between drawing a fictional face you imagined and drawing a portrait of someone you know well: the former does not impose a definite right and wrong. Making covers also makes you aware of what your tools are capable of, and what their limitations are.

Happy music making!
