Teach Leap New Gestures

The LeapTrainer.js framework makes it possible to teach a program new gestures. The demo video makes it look like this works nicely, but in practice recognition is not that reliable. Improving the learning mechanism would be crucial before I can use it in my project.

The corresponding code can be found on GitHub; a rough sketch of how the training API is used is shown below.
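
As a minimal sketch, this is roughly how teaching and recognizing a custom gesture looks with LeapTrainer.js. It assumes leap.js and leaptrainer.js are already loaded on the page; the event names ('training-complete', 'gesture-recognized') and the callback parameters follow my reading of the project README and may differ slightly in the actual build.

    // Minimal sketch (untested): teaching LeapTrainer.js a new gesture and
    // reacting when it is recognized. Assumes leap.js and leaptrainer.js are
    // loaded; event names are taken from the project README.
    var trainer = new LeapTrainer.Controller();

    // Start recording training samples for a new gesture called "circle".
    trainer.create('circle');

    // Fired once enough training samples have been recorded for the gesture.
    trainer.on('training-complete', function (gestureName) {
        console.log('Finished training: ' + gestureName);
    });

    // Fired whenever a previously trained gesture is recognized.
    trainer.on('gesture-recognized', function (hit, gestureName) {
        console.log('Recognized ' + gestureName + ' (match value: ' + hit + ')');
    });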

Affording Gestures

Can we design affordances for gestures when there is no tangible object to interact with? Can interfaces be built that give more or less clear indications of how a gesture should be performed? This question is closely linked to the question of how one can teach a user a (new) gesture language.
An article about affording horizontal swipe gestures on touch screens can be found here. From personal experience I can say that the depicted interfaces do suggest swiping, either because that is a true affordance or because users have learned how to interact with such interfaces. Can the same be achieved for mid-air micro-gestures?