Archive for April, 2013

Intermezzo (case study)

April 26, 2013

Cavalleria Rusticana (~1880). The stairway, the lying (resting?) guy, and the panicking ladies are reminiscent of the last minutes of The Godfather Part III.

The song

Warning: If you haven't seen them yet, the links that lead to YouTube videos can be spoilers.

I wouldn't have been familiar with this song if not for the movies and TV shows that used it. It is probably best remembered for the tragic ending of The Godfather Part III. I actually saw that scene on an old black-and-white console television when I was very young, but my dad, having already watched it, changed the channel, so I did not remember the song from that movie. Later, I saw a friend playing a black-and-white video of a boxer warming up in slow motion; as I eventually found out, that was the opening credits of Raging Bull. It definitely has cult appeal, and I partially remembered the tune, but I wasn't interested enough to dig further. It was not until the 31st episode of Rurouni Kenshin that I really got interested in this song. By that point the characters and story had grown on me, and the unfolding story, artistic animation, and the music made a very powerful emotional combo (one that could probably make a normal person cry). That is how Intermezzo became one of my favorite classical pieces.

The score

Since this is an old classic, computer-playable sheet music is very likely available out there. PDFs are available at IMSLP and MIDIs are available from various sources, but the best I was able to obtain was one created by MaestroMoi. Since the majority of the transcription had already been done, I had the luxury of nit-picking further, almost obsessively, trying to make the score look identical to the one available at IMSLP, and incorporating the additional woodwind parts from a more recent transcription, also from IMSLP. I could have just used one of the google-able MIDI files, but that would have spoiled the fun and the learning process.

From the score, I learned that the 2nd violins and A clarinets play divisi (divided into smaller groups playing different parts; it looks like double stops when notated). Hence, I should balance the volume to avoid making it sound as if there were twice as many second violins or clarinets. The articulation markings in the score would also help me decide how to modify the MIDI.

Anyway, here are the files:

  • MuseScore score. Size A3.  I find the A3 size compact enough but still legible when scaling to an A4 printout.
  • MuseScore exported PDF. Note: some symbols are not optimally arranged.
  • MuseScore exported MIDI.
  • MIDI tweaked and articulated with Sekaiju.  This is 4.3 times larger than the "un-articulated" version.

The complexity of the score also helped me learn more of MuseScore's features, like putting notes on an adjacent staff instead of using more ledger lines, and putting beams over notes in different bars (which starts to look odd if the next bar is on a different line or page, revealing room for improvement in MuseScore). I also found it helpful to modify the score layout to make the page as long as possible, so I don't have to navigate to different lines or pages when editing an instrument part.

Divide and conquer

At some point in the working process, it becomes easier to use sheet music for the individual instruments instead of the full score. Fortunately, "parts" were also available from IMSLP. Since I focus on one instrument at a time when tweaking MIDI, it is convenient to have the full instrument part on one page. Extracted parts also speed up the work by avoiding confusion or distraction from the other instruments. It is also helpful to put the bar number on every bar, not just the first bar of each line (you may want a printout). Since parts are not always available, re-writing the score in MuseScore would also give you this advantage.


The usual suspects plus AAMS:

  • MuseScore: converting the visual score to something that can be made into MIDI
  • Sekaiju: Further MIDI tweaking (expression and articulation)
  • Synthfont: Rendering the MIDI with soundfonts
  • Freeverbtoo: For reverb
  • AAMS: For EQ mastering based on a reference recording


The samples used are mostly from the Sonatina Symphonic Orchestra (SSO), with a few exceptions:

  • Organ: Jeux d'orgues.
  • "Low harp": aCoUsTicBaSs from the Jazz Page, since the lowest harp notes in the piece are not audible with SSO.

I'm not very particular about these other instruments since the strings dominate the sound. The strings and woodwinds from SSO worked fine. I'm not quite happy with SSO's harp, but I just let it be since it is not as loud as the rest. For reverb I used a cathedral preset in freeverbtoo. I did wonder whether I should go for an IR convolution reverb, but I probably made a "mistake" by starting out with the snappy and convenient "go-to" freeverbtoo: once I got used to it, it became difficult to make the piece sound the way I like with other reverb VSTs.

MIDI tweaking

Since I'm very familiar with how the song goes, this proved to require more effort than my usual re-arrangements. In short, I had higher standards because I had an easy and definite way of benchmarking, i.e. listening alongside an actual recording or my mind's "earworm". The score helped me decide how to tweak the MIDI by:

  • Slightly overlapping slurred/legato notes
  • Modifying the MIDI velocity depending on the note's dynamics (e.g. p, pp, ppp). MuseScore will actually take dynamics into account when exporting to MIDI, but you would still want to make adjustments to get it sounding right.
  • Separating notes that run end-to-end. If the same pitch is played in succession without a gap in between, weird buzzing sounds can sometimes result. It is also likely that real musicians would make a short pause in such cases. Can anyone bow the same note twice without pausing in between? Probably not, and the natural reverb and decay of the sound will fill in the gap, just like when a pianist steps on the sustain pedal. (A small code sketch of these timing tweaks follows this list.)
  • Modifying Expression (MIDI CC11) to control loudness, e.g. in notated crescendos and decrescendos, and wherever I feel like changing the volume. CC7 (volume) works similarly, but I did not tweak it. I also used CC11 to tame the piercing or ringing sound of the high notes at the end (from the oboe and 1st violins). This also explains how the MIDI file got more than 4 times larger (still small at 47KB), since many data points are needed to draw the expression curves.
  • Randomizing harp note starts by a slight amount to make them sound "human played."
  • Adding the low-harp workaround. As I mentioned earlier, I couldn't hear the first low notes of the harp, so I added a nominally pizzicato contrabass for that part, which was then rendered with the acoustic bass guitar sample.
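
For anyone who prefers to see these note-level tweaks spelled out, here is a minimal sketch using Python's mido library. I actually did all of this by hand in Sekaiju, so treat the numbers, the stand-in phrase, and the output file name as placeholders rather than my real settings.

```python
# A minimal sketch of the note-timing tweaks above (overlap slurred notes,
# leave a gap before repeated pitches, humanize starts). Placeholder values.
import random
from mido import Message, MidiFile, MidiTrack

TPQ = 480  # ticks per quarter note

# (start_tick, length_ticks, pitch, velocity) -- a tiny stand-in phrase
notes = [
    (0,       TPQ, 64, 72),  # E above middle C, slurred into the next note...
    (TPQ,     TPQ, 66, 72),  # ...F#
    (2 * TPQ, TPQ, 66, 72),  # the same pitch repeated back-to-back
    (3 * TPQ, TPQ, 66, 72),
]

OVERLAP = 30  # ticks of overlap for slurred/legato notes
GAP = 40      # ticks of silence before a repeated same-pitch note
JITTER = 15   # max random shift for "human played" starts (harp)

tweaked = []
for i, (start, length, pitch, vel) in enumerate(notes):
    nxt = notes[i + 1] if i + 1 < len(notes) else None
    if nxt and nxt[2] == pitch:
        length -= GAP       # leave a small gap before the repeated pitch
    elif nxt:
        length += OVERLAP   # slight overlap into the next (slurred) note
    start += random.randint(0, JITTER)  # humanize the attack a little
    tweaked.append((start, length, pitch, vel))

# Convert absolute times to delta times and write a one-track MIDI file
events = []
for start, length, pitch, vel in tweaked:
    events.append((start, Message('note_on', note=pitch, velocity=vel)))
    events.append((start + length, Message('note_off', note=pitch, velocity=0)))
events.sort(key=lambda e: e[0])

mid = MidiFile(ticks_per_beat=TPQ)
track = MidiTrack()
mid.tracks.append(track)
now = 0
for tick, msg in events:
    msg.time = tick - now
    now = tick
    track.append(msg)
mid.save('tweak_sketch.mid')
```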

I also used CC11 to apply a longer fade-out on long notes. Without this, a long note only starts fading out close to the end; the release time is about 0.5 seconds for SSO. It does not always sound natural when a long note sits near full volume for most of its duration and then fades out only briefly at the very end. Real musicians may play a long, gradually fading note, especially since, unlike a MIDI playback engine, they know beforehand how long the note should be. This is not always the case though, so you should trust your ears in the end. I think it applies more when silence is anticipated after the long note, or when the long note ends a phrase.
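
In the same spirit, here is a rough mido sketch of drawing that CC11 fade-out under a long note instead of relying on the sample's ~0.5 second release. The pitch, curve length, and step count are invented for illustration; in practice I drew these curves by hand in Sekaiju.

```python
# Sketch: ramp Expression (CC11) down over the tail of a long note.
from mido import Message, MidiFile, MidiTrack

TPQ = 480
NOTE_LEN = 8 * TPQ   # a long note (two bars of 4/4)
FADE_LEN = 3 * TPQ   # start fading well before the end
STEPS = 24           # data points in the curve (this is what bloats the file)

mid = MidiFile(ticks_per_beat=TPQ)
track = MidiTrack()
mid.tracks.append(track)

track.append(Message('control_change', control=11, value=110, time=0))
track.append(Message('note_on', note=57, velocity=70, time=0))  # a viola-register note

# Ramp Expression from 110 down to 20 over the last FADE_LEN ticks
track.append(Message('control_change', control=11, value=110,
                     time=NOTE_LEN - FADE_LEN))
step_ticks = FADE_LEN // STEPS
for i in range(1, STEPS + 1):
    value = int(110 - (110 - 20) * i / STEPS)
    track.append(Message('control_change', control=11, value=value, time=step_ticks))

track.append(Message('note_off', note=57, velocity=0, time=0))
track.append(Message('control_change', control=11, value=110, time=0))  # reset for the next phrase
mid.save('fade_sketch.mid')
```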


We've had enough talking; let's now hear the music.

One difference I notice in real recordings is that solo instruments are more pronounced, i.e. my ears can distinguish a lead violin on top of the other violins. The ensemble sound provided by SSO is somewhat more "homogeneous": there is no dominating solo instrument within the ensemble. This may have to do with the mic placement in real recordings.

Automatic mastering using reference recordings

I also tried out a cool new tool, AAMS, the Automatic Music Mastering System. AAMS adaptively applies EQ settings to an audio file based on built-in references or even on other recordings you have. Unlike the usual EQ with loads of presets, AAMS first analyzes the original audio file and then decides how much of each frequency band to boost or attenuate. And it uses a lot of frequency bands instead of the typical smoothed-out curve with 3 to 10 peak regions. What I've done is choose a reference recording, have AAMS analyze its spectrum, and make my MIDI orchestration output match that spectrum. To be on the safe side, I used a reference file with an Open Audio License from Wikimedia:

It doesn't have to be the same song, but it's reasonable to assume that reference-based automatic mastering will give the best results if the reference is playing the same notes as your project. In most cases, it may be enough that the reference has the same style, genre, or instruments, i.e. that it sounds similar to what you want. What I've done is closer to exactly what I want.
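
To make the idea concrete, here is a back-of-the-envelope numpy/scipy sketch of what I understand reference-based mastering to mean: compare the average band energies of my render against the reference and read off a suggested gain per band. This is only my mental model, not how AAMS actually works internally, and the file names are made up.

```python
# Sketch: per-band spectral comparison between a reference and my render.
import numpy as np
from scipy.io import wavfile

def band_energies_db(path, n_bands=30, fmin=30.0, fmax=16000.0):
    """Average energy (dB) in log-spaced frequency bands of a WAV file."""
    rate, data = wavfile.read(path)
    if data.ndim > 1:            # mix stereo down to mono
        data = data.mean(axis=1)
    data = data.astype(np.float64)
    spectrum = np.abs(np.fft.rfft(data))
    freqs = np.fft.rfftfreq(len(data), d=1.0 / rate)
    edges = np.geomspace(fmin, fmax, n_bands + 1)
    energies = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        energies.append(10 * np.log10(np.mean(band ** 2) + 1e-12))
    return edges, np.array(energies)

# Hypothetical file names: the Wikimedia reference and my rendered mix
edges, ref = band_energies_db('reference_wikimedia.wav')
_, mine = band_energies_db('my_render.wav')

suggested_gain_db = ref - mine   # positive = my render is weaker there, boost it
for lo, hi, gain in zip(edges[:-1], edges[1:], suggested_gain_db):
    print(f'{lo:7.0f}-{hi:7.0f} Hz: {gain:+5.1f} dB')
```

AAMS does all of this for you, with far finer bands and proper EQ suggestions; the sketch only shows why a reference with similar instrumentation makes the per-band comparison meaningful.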


AAMS Intermezzo Spectra. The red and blue plot shows the spectrum of my project, while the yellow and green plot shows the spectrum of the reference obtained from Wikimedia. Overall, they look similar, even rising and falling in the same frequency regions.


AAMS Intermezzo Suggestions. How my project audio file should be equalized. Notice how there are a lot of frequency bands. I wish I understood what this plot means beyond knowing that it looks cool.

Interestingly, the spectra didn't differ much, meaning that the SSO + Freeverbtoo combo already sounds realistically mastered to begin with. That may also be a virtue of classical orchestral music being more reproducible: the recording setup is kept as simple as possible and there are no special effects whatsoever. But I can still imagine many cases where AAMS could be a lifesaver. Especially if your speakers or headphones are not the best you could get, you could still be reasonably safe knowing that, by quantitative spectral measures, your song sounds like a professionally mastered reference. It may also help if you feel that your sample libraries are not sounding the way you imagine they should.


Although digital MIDI orchestration motivates us to create our own orchestral compositions, it is a good learning experience to work on an existing tune that you know well. This raises your standards as you try to replicate the known song as closely as possible. It is like the difference between drawing a fictional face you imagined and drawing a portrait of someone you know well: the former does not impose a definite right and wrong. Making covers also makes you aware of what the tools are capable of, and what their limitations are.

Happy music making!


Farewell (case study)

April 20, 2013

After a long break from making music due to my thesis, here I am again! This time I’ve decided to share my workflow on a recent project. You can listen to the final result below:

The song

Album art from the Escaflowne Original Soundtrack 3, on which the original music can be found.

The song is part of The Vision of Escaflowne's original soundtrack (third CD album). I saw this anime more than a decade ago, made cassette copies of my sister's CD, and listened to it regularly back in high school. For a cartoon, it has a soundtrack that goes beyond what you would normally expect; it's one of the reasons I got hooked on orchestral music (as opposed to the serious and profound classics that were less accessible to my less mature mind back then). It's composed by Yoko Kanno and Hajime Mizoguchi and (probably) performed by the Warsaw Philharmonic Orchestra. My score was based on this transcription by ThePochaccos, and all thanks to him/her for doing a great job.

Unfortunately it is not a public-domain score, so I won't be able to share it that easily. But basically, I transcribed the video using MuseScore. It is not that difficult: there are parts in the piano that need "voices", but the whole piece is mostly strings (for me, that's actually easier and more fun than emailing the YouTube user). I hope to share something public domain next time.

I’ve been itching to get Reaper for a long time already, but so far, my projects are not too complex, so  this setup is still fine.

Sonatina Symphonic Orchestra

For this song, I decided to do a proper demo of SSO. I had always used multiple orchestral sample libraries and layered the different results in Audacity. Using SSO by itself has also revealed some of its shortcomings. Sometimes the release does not sound good, giving an unnatural sound at the end of the note (e.g. in long contrabass notes or in the violas). I work around this by shortening the note until the odd-sounding part is no longer audible. The slow attacks also make it less favorable for fast, short notes, making them sound mushy and the melody less defined. I try to remedy this by increasing the melody's note velocities or decreasing the note velocities of the background instruments. I wonder whether using a VST SFZ player, instead of Synthfont's native sfz support, might give better results, but I have not really explored this option.
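
As an illustration of the velocity side of that workaround, here is a tiny mido sketch that tilts velocities toward the melody track. The note-shortening I did by hand, note by note, in the editor, and the file and track names below are hypothetical.

```python
# Sketch: boost the melody track's velocities and trim the accompaniment's.
from mido import MidiFile

MELODY_BOOST = 12     # velocity added to the melody track
BACKGROUND_CUT = 8    # velocity removed from the accompaniment tracks

mid = MidiFile('farewell_draft.mid')          # hypothetical file name
for track in mid.tracks:
    is_melody = (track.name == 'Violins I')   # hypothetical track name
    for msg in track:
        if msg.type == 'note_on' and msg.velocity > 0:
            delta = MELODY_BOOST if is_melody else -BACKGROUND_CUT
            msg.velocity = max(1, min(127, msg.velocity + delta))
mid.save('farewell_tweaked.mid')
```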

Manual looping*

Update: As it turns out, this manual looping workaround was unnecessary, and I apologize for the misinformation. My mistake was to load the sfz file directly into Synthfont (an older version back then) instead of using an sfz-playing VST; SSO strings loop nicely with Plogue sforzando. The text is kept for historical purposes, and because it may still be helpful for other libraries that do not include loops or, as another application, for hiding the distinctly repetitive sound of looped samples.

Another shortcoming of SSO is its lack of looped samples for some instruments, which forced me to come up with a tricky workaround. The first note of the first violins is 9 bars long (31 seconds at 70 BPM!). At first I thought of editing the SFZ samples, but that seemed overkill for a single note. So next, I imagined how a real orchestra would actually play a half-minute note. If it were a single violin, the maximum amount of time you could slide the bow over a string would be limited by the bow's length and the minimum bow velocity needed to produce an acceptable sound, maybe five or ten seconds (my imagination's approximation). Restarting the bow stroke would start a new note. But an ensemble of more than ten violins doesn't have to restart their individual bowing simultaneously. So while one violinist restarts, there are about ten others still bowing midway, hiding the restarting player and creating an illusion of continuity. That's my guess.

Back in the MIDI editor, I implemented "manual looping" by making an extra first-violins track (not to be confused with the second violins in the score). I broke the whole 9-bar note into shorter segments that SSO can play. The extra violin track continues the note when it is about to end in the original violin track; then the original violin track continues the note when the extra track's note is about to end. By alternating and overlapping these two violin tracks on the same note, I get a manually looped violin note. To mask the attacks of this repeating violin, I align them with the attacks (note starts) of the other instruments in the score. Of course, these alternating violin tracks must have the same volume and panning and go through the same effects chain.
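
Here is the same trick written out as a small mido sketch, purely to make the idea concrete. I actually built the alternating segments by dragging notes around in the editor, so the segment length, overlap, and pitch below are made up, and aligning the restarts with the attacks of other instruments would still be a manual step.

```python
# Sketch of "manual looping": two tracks take turns holding one long pitch,
# overlapping at each hand-over so the restarts are hidden.
from mido import Message, MidiFile, MidiTrack

TPQ = 480
TOTAL = 9 * 4 * TPQ   # the 9-bar note (assuming 4/4), in ticks
SEGMENT = 4 * TPQ     # the longest stretch the sample sustains convincingly
OVERLAP = TPQ // 2    # how much the two tracks overlap at each hand-over
PITCH, VEL = 69, 66   # an arbitrary held pitch and velocity

def add_note(events, start, end):
    events.append((start, Message('note_on', note=PITCH, velocity=VEL)))
    events.append((end, Message('note_off', note=PITCH, velocity=0)))

track_a, track_b = [], []   # absolute-time event lists for the two violin tracks
start, use_a = 0, True
while start < TOTAL:
    end = min(start + SEGMENT, TOTAL)
    add_note(track_a if use_a else track_b, start, end)
    if end == TOTAL:
        break
    start = end - OVERLAP    # the next segment begins before this one ends
    use_a = not use_a

# Write the two alternating tracks into one MIDI file
mid = MidiFile(ticks_per_beat=TPQ)
for events in (track_a, track_b):
    track = MidiTrack()
    mid.tracks.append(track)
    now = 0
    for tick, msg in sorted(events, key=lambda e: e[0]):
        msg.time = tick - now
        now = tick
        track.append(msg)
mid.save('manual_loop_sketch.mid')
```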

It may also be worth noting that the SSO updated sustain violins worked better for this trick.

Guitar = Guitar Pro

Since I can't compete with a real orchestra, I generally avoid making inferior copies of something that is already great (except for personal studies or demos). Who would listen to that? At the very least, I would change an instrument to give a different feel that is worth listening to. Hence, I changed the piano part into guitar. The guitar is a more common and accessible instrument, and being a long-time guitar player myself, I can relate more to its sad sound.

I'm also known among my friends for advocating Guitar Pro (GP) as a virtual-guitar addition to their DAWs. Even though I play guitar well (used to?) and own many guitars (too many to remember), recording guitars with my limited laptop studio setup has never given me satisfying results. GP actually started as a tablature study program (coincidentally, back when I was crazy about studying guitar tabs). When it started out, there was a free alternative that could do as much, TuxGuitar. But since Guitar Pro introduced the RSE (Realistic Sound Engine), it has, in my opinion, left TuxGuitar far behind. GP does not integrate with a DAW like a VST or soundfont, but its notation-based interface, optimized for guitar articulations, makes it far more intuitive than any VST I know. GP can simulate vibrato, hammer-ons/pull-offs, ringing, chord "brushing", harmonics and many more with a few mouse clicks and without having to tweak MIDI parameters. And the demos sound realistic enough for me (listen or download here). It's probably the guitar equivalent of the Finale + Garritan combo, but at a price below a hundred dollars. Software that unifies sophisticated music notation and virtual sound production is really something we should be thankful for (although I would also hope for piano-roll integration).


There's nothing really new here, but I would be happy if a newcomer to digital music production / MIDI orchestration learned something from this. Note that the only tool here that costs money is Guitar Pro, although I have also donated a small amount to Synthfont, as it is very useful to me and it was the first thing that exactly matched what I was looking for: a simple tool that applies soundfonts and VSTis to an existing MIDI file, without being as overkill as a full-blown DAW. With diligence, passion, and knowledge of what great tools are available out there, making quality music, music you can mix in with the rest of your iPod or MP3 player library, is no longer something that can only be done in professional, thousand-dollar studios.

Happy music making!