Teach Leap New Gestures

The framework LeapTrainer.js makes it possible to teach a program new gestures. In the video it seems to work nicely; in practice it does not work that well. Improving the learning mechanism would be crucial for using it in my project.

The corresponding code can be found on GitHub.
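
For context, this is roughly how a gesture is taught and then recognized with LeapTrainer.js. I am writing the method and event names ('create', 'training-complete', the per-gesture event) from memory of the library's README, so treat this as a sketch of the idea rather than the exact API; the repository on GitHub is the authoritative reference.

    // Sketch: teaching LeapTrainer.js a new gesture. Assumes leap.js and
    // leaptrainer.js are loaded via <script> tags; the method and event names
    // ('create', 'training-complete') are assumptions based on the README.
    var controller = new Leap.Controller();
    var trainer = new LeapTrainer.Controller({ controller: controller });

    // Record a few repetitions of a new gesture called 'circle-out'
    // and let the trainer build a template from them.
    trainer.create('circle-out');

    trainer.on('training-complete', function (gestureName) {
      console.log('Finished learning ' + gestureName);
    });

    // After training, the gesture name itself is emitted as an event
    // whenever the trainer recognizes the movement.
    trainer.on('circle-out', function () {
      console.log('circle-out recognized');
    });

    controller.connect();

Improving the learning mechanism would then presumably mean changing how the recorded frames are turned into a gesture template and how live frames are matched against it.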

What’s your Gesture?

Video Research: What’s your Gesture from Jones Merc on Vimeo.

To see whether gesture trends can be found, I asked multiple people to perform a gesture for each of eleven terms. They were asked to use only one hand and had no time to think about the gesture beforehand. I found that in some cases the results were quite predictable and most of the gestures were alike. Other terms provoked a high diversity of gestures, and sometimes very creative ones were used.

Another finding was that the interface to be controlled with such gestures directly influences the gesture itself. Many people asked what the object to be controlled would look like and said they might have come up with different gestures had they seen the object or the interface, respectively.

To see the full-length gesture video, go here.

Classification of Gestures


An interesting classification of gestures into subcategories (a short code sketch of this taxonomy follows below):

  • pointing – used to point at an object or to indicate a direction.
  • semaphoric – a group comprising gesture postures and gesture dynamics that are used to convey specific meanings. Example: the swipe gesture.
  • iconic – used to demonstrate the shape, size or curvature of an object or entity.
  • pantomimic – used to imitate the performance of a specific task or activity without any tools or objects.
  • manipulation – used to control the position, rotation and scale of an object or entity in space.

Source: Roland Aigner, Daniel Wigdor, Hrvoje Benko, Michael Haller, David Lindlbauer, Alexandra Ion, Shengdong Zhao, and Jeffrey Tzu Kwan Valino Koh. Understanding Mid-Air Hand Gestures: A Study of Human Preferences in Usage of Gesture Types for HCI. Technical report, Microsoft Research, November 2012.
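
Purely as an illustration (this is not part of the cited report), the five categories could be encoded as constants so that recorded gestures, e.g. the ones from the video study above, can be tagged; the gesture names in the example are hypothetical.

    // Illustrative only: the five categories from Aigner et al. as constants
    // for tagging recorded gestures. The gesture names below are hypothetical.
    var GestureCategory = Object.freeze({
      POINTING: 'pointing',         // point at an object / indicate a direction
      SEMAPHORIC: 'semaphoric',     // posture + dynamics conveying a specific meaning
      ICONIC: 'iconic',             // demonstrates shape, size or curvature
      PANTOMIMIC: 'pantomimic',     // imitates a task without tools or objects
      MANIPULATION: 'manipulation'  // controls position, rotation, scale in space
    });

    // Example tagging: the swipe gesture mentioned above is semaphoric.
    var taggedGestures = [
      { name: 'swipe', category: GestureCategory.SEMAPHORIC },
      { name: 'point-at-screen', category: GestureCategory.POINTING }
    ];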

Affording Gestures

Can we design affordances for gestures without a tangible object to interact with? Can interfaces be built with more or less clear indications of how a gesture should be performed? This question is strongly linked to the question of how one can teach a user a (new) gesture language.
An article about affording horizontal swipe gestures on touch screens can be found here. From personal experience I can say that (either because it is a true affordance or because users have learned how to interact with such interfaces) the depicted interfaces indeed suggest some swiping affordance. Can this also be achieved for mid-air micro-gestures?