Sunday, January 13, 2013

A Series: Basics of DIY Wind Controllers

Over the past few years I've spent some time thinking about and designing wind instrument controllers. I thought a good way to give something back would be to produce a series of blog posts describing what I've learned while tinkering with my projects.

As it turns out, I'm not a very skilled craftsman, so most of my projects end up looking like something Homer Simpson would have created. But I think I've learned a few things about building the electronics and writing code. So, in this series of posts, I'm going to concentrate on sharing some basic building blocks and core concepts, like the following:
  • Why electronic wind instruments are hard
  • Breath Sensing 101
  • Mapping analog readings to MIDI continuous controller values
  • MIDI note selection methods
  • Using sensors to alter performance data in real time
  • And more, as I think of them

Since I've been using Arduino and Teensy microcontrollers to do my experiments, I'm going to focus on those, so the code examples will target those platforms.

If you are a performer who uses an EWI, EVI, or WX-series controller, there won't be a lot of practical advice for you here, but the circuits and code may help you understand what's going on inside your instrument. Also, the posts on building synth patches that work well with wind controllers will certainly be applicable to your live rigs. So, please - read on!

So, on to post number one:

Why Electronic Wind Instruments are Hard

First of all, if you're not familiar with what an electronic wind instrument is, I'll define the term.

An electronic wind instrument is a musical instrument that employs electronics to produce the instrument's sound, and is articulated by blowing into the instrument.

There are a number of commercially available electronic wind instruments. The most common instruments are the EWI series from Akai, and the WX-5 from Yamaha. Both are woodwind-style controllers - that is, they are fingered in a way that is easily learned by someone who knows how to play the saxophone, clarinet, or flute. The Akai instruments also support a mode that is more natural for trumpet players to use.

The Akai instruments are the latest in a long line of wind controllers that started with Nyle Steiner's work in the 1970s. For more information on the history of the Steinerphone/EWI, see the Nyle Steiner home page, and for many more links about wind controllers, check out the Wind Controller links page from Patchman Music. These two paragraphs don't come close to covering the history of wind controllers, but the Patchman links page is an excellent resource for learning more.

(Aside: Nyle Steiner also invents lots of other crazy stuff. And he's a ham like me.)

ADSR

The majority of electronic instruments you can buy are really good at emulating instruments whose sound can be described by the ADSR model (Attack, Decay, Sustain, Release):

[Figure: a typical ADSR envelope, showing the attack, decay, sustain, and release phases of a sound over time]
This model describes how a sound's loudness evolves over time. For example, when you hit a key on a piano, there is an initial attack (the A phase) as the hammer strikes the string and sets it vibrating. Most of the string's vibrational energy dissipates quickly (the D, or decay, phase), after which the string continues to vibrate at a lower volume, fading out gradually (the S, or sustain, phase). When the key is released, the piano's felt damper touches the string and stops the vibration (the R, or release, phase).

Most synthesizer patches have a fixed ADSR envelope, a "recipe" for the sound as it progresses through time. For a plucked or struck instrument, the performer has some control over the duration of these phases, and can also exert some control over the initial input, e.g. how hard the string is plucked or how hard the drum head is struck.
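To make the model concrete, here's a minimal sketch of how a synth might compute a linear ADSR envelope, written in the Arduino-style C we'll use throughout this series. The phase times and sustain level are made-up values for illustration, not taken from any real instrument:

    // Illustrative linear ADSR envelope. All values are arbitrary.
    const float ATTACK_MS  = 10.0;   // time to rise from silence to full level
    const float DECAY_MS   = 200.0;  // time to fall from full level to sustain
    const float SUSTAIN    = 0.6;    // level held while the key stays down
    const float RELEASE_MS = 300.0;  // time to fade to silence after key-up

    // Amplitude (0.0-1.0) while the key is held; t = ms since note-on.
    float heldAmplitude(float t) {
      if (t < ATTACK_MS)                               // A: ramp up
        return t / ATTACK_MS;
      t -= ATTACK_MS;
      if (t < DECAY_MS)                                // D: fall to sustain
        return 1.0 - (1.0 - SUSTAIN) * (t / DECAY_MS);
      return SUSTAIN;                                  // S: hold until note-off
    }

    // Amplitude after the key is released; t = ms since note-off,
    // startLevel = the amplitude at the moment of release.
    float releaseAmplitude(float t, float startLevel) {
      if (t < RELEASE_MS)                              // R: fade out
        return startLevel * (1.0 - t / RELEASE_MS);
      return 0.0;                                      // envelope finished
    }

Notice that the performer's only input is the note-on and note-off: once the note starts, the shape of the sound is baked in.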

ADSR works really well for modeling instruments that are plucked or struck, which includes most of the staple instruments of popular music like:
  • Guitar
  • Bass
  • Drums
  • Piano and other keyboards

Wind instruments, on the other hand, don't follow this model at all. The sound is produced by a column of air from the performer's lungs, which in turn causes something to vibrate - a single reed (clarinet/saxophone), two reeds (oboe/bassoon), the lips (trumpet/horn/trombone/tuba), or the air column itself (flute/recorder). Articulation (the starting and stopping of sound) is generally accomplished by interrupting the stream of air with the tongue. The ADSR model simply doesn't reflect the way wind instruments work, and it doesn't model the way bowed instruments like the violin make sound, either.
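To give you a taste of where this series is headed, here's a hypothetical sketch of the wind-controller alternative: instead of triggering a fixed envelope, the instrument streams the player's breath continuously. Everything here is a placeholder (a breath sensor on analog pin A0, a simple linear scaling, a MIDI out circuit on the hardware serial port); later posts will get into real sensor choices and mapping:

    // Hypothetical sketch: map a breath sensor on analog pin A0 to
    // MIDI CC2 (Breath Controller), sent out the hardware serial port.
    const int BREATH_PIN = A0;
    int lastCC = -1;                      // last value sent, to avoid repeats

    void setup() {
      Serial.begin(31250);                // standard MIDI baud rate
    }

    void loop() {
      int raw = analogRead(BREATH_PIN);   // 0..1023 from the ADC
      int cc = map(raw, 0, 1023, 0, 127); // scale to the 7-bit MIDI range
      if (cc != lastCC) {                 // only send when the value changes
        Serial.write(0xB0);               // Control Change, MIDI channel 1
        Serial.write(2);                  // controller 2 = Breath Controller
        Serial.write(cc);                 // the new breath value
        lastCC = cc;
      }
      delay(2);                           // a few hundred updates/sec is plenty
    }

Because the breath value streams in real time, the synth's loudness can follow the player's air from moment to moment - which is exactly what a fixed ADSR envelope can't do.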

Due to the popularity of the instruments that ADSR models well, manufacturers of electronic musical instruments have generally not found other types of instruments to be commercially viable. Yamaha and Akai each offer wind controllers that emulate woodwind instruments, and some smaller companies produce small quantities of instruments that emulate other instrument families, including trumpets and violins, but for the most part, wind players have not been invited to the electronic music party until they learn to play a different instrument.

For this reason, even if you have a wind instrument controller like one from Yamaha or Akai, you're faced with the difficult task of finding synthesizer patches that work well with your controller. If you *want* to sound like a Fender Rhodes electric piano, no problem, but if you want to make that Rhodes fade in from nothing, swell up, and fade out, sorry, you're out of luck. The ADSR envelope of the Rhodes patch you have models the characteristics of the real Fender instrument.

I'll cover this topic in more detail in a later post, but the important thing to remember is that if you want to play your wind instrument controller in an idiomatic way, you're going to have to either go find some patches specifically designed for wind controllers, or build your own.

I should also mention that my work has focused on building instruments that send MIDI data, but an equally valid approach is to build instruments that send raw sensor data to a device that produces the sound itself, rather than relying on a MIDI synthesizer. The original Steiner EWI and its Akai variants had a dedicated synthesizer that directly read the instrument's sensors. Another option is to feed all the sensor outputs to a computer, which then uses a software environment like Pd or Max to realize the sound.

There are a number of artists using that approach, but one of the most exciting, in my opinion, is Onyx Ashanti, who is really pushing the envelope on the form factor for wind controllers. He started with a Yamaha wind controller, deconstructed the functionality it provided, scratched some personal itches he had with performing live, and arrived at the Beatjazz Controller. I encourage you to follow his work.

What's Next

In the next post, I'll select an inexpensive sensor that you can use to sense breath pressure in a wind controller. We'll cover how to connect it to an Arduino or Teensy controller, and how to connect tubes to the sensor so you can blow into it and measure the breath intensity.

14 comments:

  1. Some tracks I made last year, using my DIY wind controller. https://soundcloud.com/fundorin

    ReplyDelete
  2. Very nice, Alexander! Is your controller's design published on the web?

    ReplyDelete
  3. I stumbled onto your blog - a great treasure trove of information!
    Thanks!

    ReplyDelete
  4. Hi. Maybe this page will be interesting to you.
    http://lowcostewi.dragoljub.in.rs/

    ReplyDelete
    Replies
    1. Very cool - a home-built EWI with touch-sensitive keys. Thanks for sharing that!

      Delete
  5. Currently working on a similar project, except mine will replace the tube with a real trumpet mouthpiece. I feel that this will allow one to better capture the subtle expression of a trumpet.

    Gordon, a question for you - what sort of pressure sensor or microphone should I go for to accomplish this task? I'm not sure that the Freescale would have the response necessary to register the higher frequencies put out by a mouthpiece. It seems built for more static applications.

    ReplyDelete
  6. DBaylies - great to hear about your project. When you can, share a link here to anything you have on-line about it.

    Are you planning to have the player buzz their lips just like a conventional trumpet? I'm trying to get a feel for whether you are trying to actually sample the audio-frequency pressure variations inside the instrument's air column, or if you want to just detect pressure and maybe use that to alter the sound in some way.

    I have an "octave divider" (somewhere, but I can't find it now) made, I think, by Electro-Voice in the 1970s. You drilled a hole into your trumpet mouthpiece, attached a transducer to the mouthpiece, plugged it into an external box of analog circuitry, and it could transpose the sound up or down by an octave or two. I used it with my trombone to "play tuba" for a recording session, and it worked quite well! So the general idea of attaching a transducer/microphone to the side of a brass mouthpiece should work.

    ReplyDelete
    Replies
    1. Thanks for the reply!

      Yes, I'd have the player buzz their lips into the mouthpiece just like with a conventional trumpet. The difference is that there would be no horn - only a mouthpiece.

      I'm not sure what the difference between "sampling the audio-frequency pressure variations" and "detecting pressure" is - but I think I'm after the first one. I don't want to alter the dry trumpet sound, I want to convert the pressure output by a player buzzing on a mouthpiece to MIDI signals and use that to drive a softsynth.

      That octave divider looks interesting! It would be fun to try. Also, I agree that having a transducer on the side of the mouthpiece is probably the best method. Any recommendations for relatively cheap transducers I could buy?

      Thanks for the input, it's much appreciated.

      Delete
    2. Unfortunately I've never done any experimentation with microphones/transducers, so I can't offer any advice. But it does sound like you want to convert pitch to MIDI. Have you looked at any of the existing pitch-to-MIDI converters on the market? For example: https://itunes.apple.com/us/app/midimorphosis-polyphonic-audio/id495856824?mt=8 seems to work pretty well.

      Delete
    3. Thanks for the link! Your idea led me to spend a few hours experimenting with available software. The best combo I could put together was the Yamaha Silent mute with WIDI (http://www.widisoft.com/english/widi-audio-to-midi-vst.html). It gave me okay results, the primary issue being lag (about 0.25 seconds). Additionally, I could not find any programs that offered anything beyond note conversion, whereas I'm after more detail (velocity, aftertouch, pitchbend, etc.).

      I've decided to use a mic with the Teensy 3.2 (with audio shield), and use an FFT to calibrate MIDI variables as necessary. I can keep you updated if you'd like. Let me know if you find any more info on workable realtime audio to MIDI conversion!

      Delete
    4. I also have a Yamaha Silent Brass mute, and I've had a lot of fun running my trombone sound into audio effects processors.

      The Teensy + audio shield might be able to achieve lower latency on the pitch-to-MIDI conversion, since there is no operating system to deal with. The Teensy Audio Library (http://www.pjrc.com/teensy/td_libs_Audio.html), if I recall, does include a pitch analysis module, so that could be a good place to start.
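
      If I were starting on that, I'd try something like the following. Fair warning: this is untested, just my reading of the library's note-frequency analyzer, and the mic gain and threshold values are guesses:

          // Untested sketch: detect pitch from the audio shield's mic input
          // using the Teensy Audio Library's note-frequency analyzer.
          #include <Audio.h>

          AudioInputI2S             mic;       // mic input on the audio shield
          AudioAnalyzeNoteFrequency notefreq;  // YIN-based pitch detector
          AudioConnection           cord(mic, 0, notefreq, 0);
          AudioControlSGTL5000      codec;

          void setup() {
            AudioMemory(30);                   // notefreq needs a generous buffer
            codec.enable();
            codec.inputSelect(AUDIO_INPUT_MIC);
            codec.micGain(40);                 // dB - a guess, adjust to taste
            notefreq.begin(0.15);              // detection threshold - also a guess
            Serial.begin(115200);
          }

          void loop() {
            if (notefreq.available()) {
              float freq = notefreq.read();        // detected pitch in Hz
              float prob = notefreq.probability(); // confidence, 0.0-1.0
              if (prob > 0.98)                     // ignore shaky readings
                Serial.println(freq);
            }
          }

      From there you'd convert the frequency to a MIDI note number and track amplitude for dynamics, but the pitch detection is the hard part.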

      Another idea, if you're up to coding, would be to implement this with a Bela board, a recent KickStarter project: https://www.kickstarter.com/projects/423153472/bela-an-embedded-platform-for-low-latency-interact/description.

      It benefits from a fast processor like the one found on the BeagleBone, but the audio signal chain bypasses the kernel and offers much lower latency. I don't know anything about the WIDI VST you tried, but OS-introduced latency may be your problem - or pitch-to-MIDI might just be computationally expensive. I'm not sure, and I'm no expert.

      Another idea you might consider is to find some proxy for the frequency of the pitch produced by the instrument. For example, if you could measure the volume of the oral cavity and also the velocity of the air exiting the mouthpiece, might that be enough to select a pitch? I've thought about that a bit, and I recall someone on the windcontroller mailing list prototyping it, but I can't find it at the moment.

      Delete
  7. The Bela is very interesting! Thanks for telling me about it - I'll certainly consider it should I run into any latency issues. It seems especially well-suited for my purpose.

    Measuring a proxy could work, although I feel that measuring the air pressure fluctuations in the shank of the mouthpiece is the most foolproof way to get as much sound information as possible.

    I'm currently waiting for parts to come in, and I can keep you updated on my progress.

    ReplyDelete
    Replies
    1. Excellent, I'm excited to hear what you learn. It sounds like a hard problem!

      By the way, I found the octave divider and mouthpiece transducer I mentioned in an earlier comment. It's a Vox Ampliphonic Octavoice II, and there is information about it here: http://www.voxshowroom.com/us/amp/octavoice.html.

      Maybe a little googling about this device will provide some hints on how to attach a pickup to the mouthpiece. I'll also make a short post with some pictures of the device for you.

      Delete