These two techniques are useful if you are trying to play an existing synthesizer patch that is not specifically designed for wind control. They still leave a lot to be desired if you want to create an instrument that plays like an acoustic wind instrument, but I'll cover that topic in a later post.
An Inventory Of Expression Parameters
Before I go much further, let's take a moment to understand the various ways the MIDI spec conveys expressive data, and how that MIDI data relates to a breath sensor.
Note On Velocity
With a velocity-sensitive keyboard, the MIDI note on velocity describes how hard the key on the keyboard was pressed (strictly speaking, it's the speed with which the finger moves the piano key through its range of motion - that's why it's called "velocity" and not "force"). For percussive instruments like drums or keyboards, it usually controls the overall volume of the ADSR envelope. Velocity values range from 1 to 127, with 1 representing the lightest possible touch and 127 representing the maximum force (a note on with velocity 0 is conventionally interpreted as a note off).
For a wind controller, it's not so clear. When a wind player articulates a note, the note starts from zero breath and increases at some rate controlled by the player. In the case where the player is using a hard articulation (that is, one with a fast rise time), it may make sense to try to map the rise time of the sensor to a note on velocity. This can be helpful if you are trying to play a patch that behaves like a percussive or plucked instrument. We'll implement this in the first sketch in this series, and see how it works out. As a hint, it's kind of a dead-end if we really want to build an instrument that responds like a real-world wind instrument, but the concepts here are generally useful, plus if you really do want to play a piano or drum patch, it'll help you play that patch with more expression.
There is also a corresponding note off velocity. I don't think this is widely used in synth patches, but it would control the duration of the R phase of the ADSR envelope. We won't implement it in our instrument's code, to keep things simple.
Aftertouch
In addition to detecting how hard a key was initially pressed, many MIDI keyboards can detect how hard a key is being pressed while it is being held down and transmit that data via MIDI aftertouch messages. With many synth patches, pressing harder on the key will engage a vibrato effect like the modulation wheel.
With our wind controller, it's pretty simple to send aftertouch data that corresponds to the breath values. So, if a patch is configured to respond to aftertouch, we should be able to make it respond to how hard we blow after starting the note. We'll implement that in our sketch and see how it affects a few different synthesizer patches.
Continuous Controllers
We saw how continuous controllers work in a previous post. They are MIDI messages that aren't associated with any particular note. There are many different continuous controllers defined (here's a complete list). Some synthesizers, and especially software-based synthesizers, allow you to route any continuous controller to any synthesizer parameter. When we get to the post about designing synth patches for wind control, we'll see how powerful this capability is.
Mapping Breath to Note On Velocity
Here's a sketch that will allow you to play MIDI patches that respond to note on velocity. Most piano, drum, and guitar patches will probably respond to note on velocity information.
The basic approach is to take the reading from the breath sensor at the time the note begins, and map that to a value in the range 0 to 127 (the minimum and maximum allowable MIDI note on velocity values), and send that with the note on event. We can use the Arduino map() method to do that math, like we did in our continuous controller sketch.
The only slightly tricky thing is that, when the breath starts, it takes a few milliseconds to build to its final value, so if we use the first value we read, it won't be as "loud" as the player intended. To get around that, we actually wait a little while after we see the breath go above the note on threshold value, and then re-sample the pressure sensor. The second sample is the one we map to the velocity.
The other concept in this sketch that you may not be familiar with is the use of a very simple finite state machine. While it may sound complicated, it's actually pretty simple. At any time, the sketch is in one of three states:
- Note Off State - no note is sounding
- Rise Time State - the performer has started to blow into the sensor, and we're waiting a bit for the pressure to rise to its final value
- Note On State - a note is sounding
Current State   Input                                  Next State
-----------------------------------------------------------------
NOTE_OFF        Breath value goes above                RISE_TIME
                NOTE_ON_THRESHOLD
RISE_TIME       RISE_TIME milliseconds have elapsed    NOTE_ON
RISE_TIME       Breath value goes below                NOTE_OFF
                NOTE_ON_THRESHOLD
NOTE_ON         Breath value goes below                NOTE_OFF
                NOTE_ON_THRESHOLD
Another way to visualize a state machine is with a state graph. The states are shown as ovals, and the transitions between states are the edges (curved arrows) in the graph. Note that two of the states have transitions that point to themselves (I didn't include those in the table above). That's perfectly ok and shows that some inputs might not cause a transition out of the current state. Click the diagram to see a larger version if it's hard to read.
Here's the sketch:
#define MIDI_CHANNEL 1

// The threshold level for sending a note on event. If the
// sensor is producing a level above this, we should be sounding
// a note.
#define NOTE_ON_THRESHOLD 80

// The maximum raw pressure value you can generate by
// blowing into the tube.
#define MAX_PRESSURE 500

// The three states of our state machine

// No note is sounding
#define NOTE_OFF 1

// We've observed a transition from below to above the
// threshold value. We wait a while to see how fast the
// breath velocity is increasing
#define RISE_TIME 10

// A note is sounding
#define NOTE_ON 3

// The five notes, from which we choose one at random
unsigned int notes[5] = {60, 62, 65, 67, 69};

// We keep track of which note is sounding, so we know
// which note to turn off when breath stops.
int noteSounding;

// The value read from the sensor
int sensorValue;

// The state of our state machine
int state;

// The time that we noticed the breath off -> on transition
unsigned long breath_on_time = 0L;

// The breath value at the time we observed the transition
int initial_breath_value;

void setup() {
  state = NOTE_OFF;  // initialize state machine
}

int get_note() {
  // random()'s upper bound is exclusive, so this picks from all five notes
  return notes[random(0, 5)];
}

int get_velocity(int initial, int final, unsigned long time_delta) {
  // For now we ignore initial and time_delta, and simply map the
  // re-sampled pressure onto the MIDI velocity range.
  return map(final, NOTE_ON_THRESHOLD, MAX_PRESSURE, 0, 127);
}

void loop() {
  // read the input on analog pin 0
  sensorValue = analogRead(A0);
  if (state == NOTE_OFF) {
    if (sensorValue > NOTE_ON_THRESHOLD) {
      // Value has risen above threshold. Move to the RISE_TIME
      // state. Record time and initial breath value.
      breath_on_time = millis();
      initial_breath_value = sensorValue;
      state = RISE_TIME;  // Go to next state
    }
  } else if (state == RISE_TIME) {
    if (sensorValue > NOTE_ON_THRESHOLD) {
      // Has enough time passed for us to collect our second
      // sample?
      if (millis() - breath_on_time > RISE_TIME) {
        // Yes, so calculate MIDI note and velocity, then send a
        // note on event
        noteSounding = get_note();
        int velocity = get_velocity(initial_breath_value, sensorValue,
                                    RISE_TIME);
        usbMIDI.sendNoteOn(noteSounding, velocity, MIDI_CHANNEL);
        state = NOTE_ON;
      }
    } else {
      // Value fell below threshold before RISE_TIME passed. Return to
      // NOTE_OFF state (e.g. we're ignoring a short blip of breath)
      state = NOTE_OFF;
    }
  } else if (state == NOTE_ON) {
    if (sensorValue < NOTE_ON_THRESHOLD) {
      // Value has fallen below threshold - turn the note off
      usbMIDI.sendNoteOff(noteSounding, 100, MIDI_CHANNEL);
      state = NOTE_OFF;
    }
  }
}
Here's how it sounds. I play four notes quietly, four notes loud, four more notes quietly, and then do four notes of increasing volume (a crescendo, in musical terms), and eight notes of decreasing volume (a diminuendo). The sketch is randomly picking which notes to play, but the attack velocity of each note is under my control.
Looking at the MIDI data, it looks like the values are ranging from a low of about 4 to a maximum of about 100, so I could alter the MAX_PRESSURE value in the sketch to fix that. But you get the idea.
Here's another rhythmic motif in 9/8 that I played on a different wind controller, a Yamaha WX-7, that also maps breath to note on velocity. Because of this mapping, I'm able to accent the 1st, 4th, 6th, and 8th notes of each group of nine, resulting in a 3 + 2 + 2 + 2 pattern.
In part two, I'll add MIDI aftertouch to this sketch.
That's exactly what I am looking for. Thank you very much. I have an interesting project and need this information to fine-tune my output. I will share it with you if it works well enough to share :)
Dirk from Germany
Thanks, Dirk! Very interested to see what you come up with.
First I am reading your whole blog :)
Dirk
Hi Gordon,
I want to build an electric ocarina.
Why?
- because I don't want to disturb anyone if I am practicing :)
- because it isn't buyable :)
Features?
- it should fit into a real alto ocarina
- no external devices are needed, so I can play wherever I want to
- audio via headphones
- every finger combination should be playable, like a real ocarina
My idea?
- everything is powered by a PowerBoost 1000C and a lithium battery
- the ocarina is designed by myself with Autodesk Fusion 360 and 3D printed by my workmate
- a pressure sensor (MPX5010G) detects when I am blowing
- a touch sensor (MPR121) detects which finger holes are open
- I have solved a mathematical optimization problem to determine which hole on my alto C ocarina has which frequency
- add the frequencies of the opened holes
Without MIDI?
- my Arduino Nano sends a tone() signal with the frequency sum to the piezo or directly to the headphones
With MIDI?
- I check which MIDI note frequency best matches my frequency sum
- I send the selected MIDI note to a NanoPi NEO
- FluidSynth running with an ocarina SoundFont gives me the expected audio output to my headphones
How long have I been tinkering?
- for about 2 years :)
- I started without any knowledge :)
Is it finished?
- both variants are playable, but finally I have to build one variant inside the 3D-printed ocarina
Pro MIDI?
- better sound than the piezo because of good SoundFonts
- changeable SoundFonts: maybe I want to play a flute, a saxophone, or even a piano
- I could play chords (more than one note at a time) as a special feature
Contra MIDI?
- I need a NanoPi NEO, which means I need more room inside my ocarina
- more wires means more complexity and more sources of errors
- it takes more than 40 seconds after powering on before the NanoPi NEO is ready to play MIDI (this could be optimized)
Future specials?
- something to change the fingering system
- something to change the octaves
- something to change the pitch
- extra pressure and touch sensors to simulate a double (triple, quadruple) ocarina
- something to change the volume
- something to change the instrument (if MIDI)
Super future specials?
- an onboard looper to make incredible arrangements :)
Questions?
Are there other possibilities than my NanoPi NEO to play MIDI?
I mean, I only need the NanoPi NEO to run the synthesizer.
That seems oversized in my opinion.
Do you have any ideas or suggestions?
What do you think about this project?
Do you know about similar ocarina projects?
Thank you very much. I love your blog. It helped me so much in organizing my Arduino source code :)
Your finite state machine works very well. But I had problems with your MIDI sendNoteOff.
Therefore I am only using sendAllNotesOff (CC 123).
Dirk
This sounds great, Dirk! Here are some responses:
>Why?
>- because I don't want to disturb anyone if I am practicing :)
>- because it isn't buyable :)
Those are great reasons! I think the best maker projects are the ones where the maker has some personal itch to scratch. In your case, you want to make something that does not yet exist.
>Features?
It's great that you are thinking about the requirements for your instrument to be successful.
>- a pressure sensor (MPX5010G) detects when I am blowing
I note that this sensor has a 0-10 kPa range. 10 kPa is pretty high pressure, which I'm not sure you'll generate with your lungs. If there are any sensors with a lower range, you might want to consider them. I'm sure the sensor you've selected will work, but you may be losing some resolution.
>How long have I been tinkering?
>- for about 2 years :)
>- I started without any knowledge :)
Excellent! Congratulations on your perseverance!
>Pro MIDI?
I think you've addressed all the major points. Making your Ocarina a MIDI instrument will give you a huge variety of sounds you can play. Be aware that many sound libraries are optimized for keyboards and won't respond to breath input. I address this issue in my blog.
>Contra MIDI?
>- I need a NanoPi NEO, which means I need more room inside my ocarina
>Questions?
>Are there other possibilities than my NanoPi NEO to play MIDI?
>I mean, I only need the NanoPi NEO to run the synthesizer.
>That seems oversized in my opinion.
Take a look at the Teensy Microcontrollers from https://www.pjrc.com - they are really small, they support all the sensors you are using, and they can be configured as USB-MIDI devices, which means you can plug one into your Mac/PC/Linux box running, say, Ableton Live, and it will Just Work. They also boot into your code instantaneously.
This does mean your Ocarina will be tethered to a PC via a USB cable, but you will have access to a really huge repertoire of sounds.
>Do you have any ideas or suggestions?
>What do you think about this project?
Yes - once you have something working, practice some piece of music using your instrument, take a video, and post it on YouTube. And post the link here.
Other than that, I think this is great! Keep going, and make something really cool.
>Do you know about similar ocarina projects?
Do you know about Smule's Ocarina app for the iPhone? Here's Ge Wang demoing it: https://www.youtube.com/watch?v=tERtCiAvdfQ
I was lucky enough to attend a workshop at Stanford University's CCRMA where Ge gave a guest lecture.
Thanks for sharing your work, Dirk.