> I can't believe no one has done this before.
One day, I was trying to play a song on my synthesizer that required both hands, but I also wanted to modulate the high-pass filter and distortion simultaneously. I had run out of hands. But then I realized we have other parts of our body we can use. The most obvious candidate is our feet (with a pedal, for instance), but why not take advantage of the most expressive part of our entire body: our face?
It clicked. I just needed computer vision to map facial gestures, then use those signals to control various MIDI inputs on my synthesizer.
But I didn't want to hardcode a function for converting facial features into MIDI outputs, so I used Wekinator, which lets a user easily train their own shallow MLP to map from face space to MIDI space. Wekinator is great because it can learn complex mappings from only a handful of examples. The user merely demonstrates a few faces they might make and what they want the instrument to sound like, and the MLP interpolates the rest.
Check out what I made with it:
Here's some of the code, which I wrote in ChucK.
First, the program relaying messages from Wekinator to my synth:
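A minimal sketch of such a relay, assuming Wekinator's default output settings (OSC messages to `/wek/outputs` on port 12000, here with two continuous outputs) and hypothetical CC numbers for filter cutoff and distortion — adjust both to your synth:

```chuck
// Receive Wekinator's predictions over OSC and relay them as MIDI CC.
OscIn oin;
OscMsg msg;
12000 => oin.port;                      // Wekinator's default output port
oin.addAddress("/wek/outputs, ff");     // expecting two continuous outputs

MidiOut mout;
MidiMsg midi;
if (!mout.open(0)) me.exit();           // MIDI device 0; change for your setup

while (true)
{
    oin => now;                          // wait for an OSC message
    while (oin.recv(msg))
    {
        msg.getFloat(0) => float cutoff;      // 0..1 from Wekinator
        msg.getFloat(1) => float distortion;  // 0..1 from Wekinator

        // scale to 0..127 and send as control changes on channel 1
        176 => midi.data1;                    // 0xB0: CC, channel 1
        74 => midi.data2;                     // cutoff CC number (assumed)
        Math.round(cutoff * 127) $ int => midi.data3;
        mout.send(midi);

        94 => midi.data2;                     // distortion CC number (assumed)
        Math.round(distortion * 127) $ int => midi.data3;
        mout.send(midi);
    }
}
```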
And this is the program which relays messages from facial data to Wekinator:
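A sketch of the other direction, assuming a FaceOSC-style tracker sending gesture values on its default port 8338 (here just one feature, mouth width) and Wekinator listening on its default input port 6448:

```chuck
// Forward facial-tracking values to Wekinator's input port.
OscIn oin;
OscMsg msg;
8338 => oin.port;                            // FaceOSC's default port
oin.addAddress("/gesture/mouth/width, f");   // one facial feature

OscOut xmit;
xmit.dest("localhost", 6448);                // Wekinator's default input port

while (true)
{
    oin => now;                              // wait for tracker data
    while (oin.recv(msg))
    {
        // repackage the feature as a Wekinator input message
        xmit.start("/wek/inputs");
        msg.getFloat(0) => xmit.add;
        xmit.send();
    }
}
```

In practice you'd add one `addAddress` per facial feature and pack them all into a single `/wek/inputs` message, since Wekinator expects a fixed-size input vector.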