Classification of Gestures


An interesting classification of gestures into subcategories:

  • pointing – used to point at an object or to indicate a direction.
  • semaphoric – a group comprising gesture postures and gesture dynamics that are used to convey specific meanings. Example: a swipe gesture.
  • iconic – used to demonstrate the shape, size, or curvature of an object or entity.
  • pantomimic – used to imitate the performance of a specific task or activity without any tools or objects.
  • manipulation – used to control the position, rotation, and scale of an object or entity in space.

Source: Roland Aigner, Daniel Wigdor, Hrvoje Benko, Michael Haller, David Lindlbauer, Alexandra Ion, Shengdong Zhao, and Jeffrey Tzu Kwan Valino Koh. Understanding Mid-Air Hand Gestures: A Study of Human Preferences in Usage of Gesture Types for HCI. Technical report, November 2012.

Affording Gestures

Can we design affordances for gestures without a tangible object to interact with? Can interfaces be built with more or less clear indications of how a gesture should be performed? This question is strongly linked to the question of how one can teach a user a (new) gesture language.
An article about affording horizontal swipe gestures on touch screens can be found here. From personal experience I can say that (whether because of true affordance or because users have learned how to interact with such interfaces) the depicted interfaces indeed show some swiping affordances. Can this also be achieved for mid-air micro-gestures?

Use a Micro-Gesture to start an application’s feature

Video Sketch: Start Shazam’s Listening Feature via Gesture from Jones Merc on Vimeo.

What if one could define specific gestures to start specific features of an application?
This would be a diversification of the smartphone's interface (see also: Standardised vs. Diversified Interaction), because one could skip all the buttons one would normally need to navigate to a particular action or feature within the app.
In the video sketch I show how it could feel if a cupping-your-hand gesture initiated Shazam's song-listening feature. Especially in that use case, one is glad if as little time as possible is needed to start the listening function (otherwise the song may already be over).

Standardised vs. Diversified Interaction

[Figure: Diversified interaction process – standard]

[Figure: Diversified interaction process – diversified]

Highly standardised multi-functional devices like smartphones or computers sometimes require quite tedious sequences of actions before you finally reach the action or control you want to execute. Because the device is multifunctional, touch buttons (in the case of a smartphone) have to tell it which application and which action should be executed, narrowing down the options step by step.

If such devices could also be controlled in a diversified way (in terms of interaction), for example via micro-gestures, which offer far more possibilities, one could skip many of these steps. A single gesture could mean: go to app A, start feature B, and choose option C.
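The "one gesture = app + feature + option" idea can be sketched as a simple lookup table. This is a minimal illustration only; the gesture names, apps, and features below are invented, not part of any real API:

```python
# Hypothetical sketch: micro-gestures mapped straight to deep actions,
# skipping the usual navigate-then-tap sequence.
DEEP_ACTIONS = {
    # one gesture encodes app + feature + option in a single step
    "cup_hand":         ("Shazam",   "listen",        None),
    "pinch_twist":      ("MusicApp", "start_playback", "favorites_playlist"),
    "double_tap_thumb": ("Camera",   "record_video",  "slow_motion"),
}

def dispatch(gesture: str):
    """Resolve a recognized gesture to (app, feature, option), or None."""
    if gesture not in DEEP_ACTIONS:
        return None  # unknown gesture: better to do nothing than to guess
    return DEEP_ACTIONS[gesture]
```

For example, `dispatch("cup_hand")` would jump directly to Shazam's listening feature without any intermediate button presses.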

Of course, further questions arise within that use case, for example what happens if a gesture is too close to an everyday gesture and might start a process unintentionally.
In that case, a preceding gesture could solve the problem. Just as saying «OK Google» initiates voice control in Google services, a very specific and unique gesture could activate gesture recognition.
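The preceding-gesture idea amounts to a small two-state gate: the recognizer ignores everything until it sees a unique activation gesture, then treats the next gesture as a command and returns to the dormant state. A minimal sketch, with invented gesture names:

```python
# Hypothetical sketch of an activation gesture gating command recognition,
# analogous to how "OK Google" arms voice control.
class GestureGate:
    ACTIVATION = "snap_twice"  # assumed unique, unlikely in everyday motion

    def __init__(self):
        self.armed = False

    def feed(self, gesture: str):
        """Process one recognized gesture; return a command or None."""
        if not self.armed:
            if gesture == self.ACTIVATION:
                self.armed = True  # the next gesture counts as a command
            return None  # everyday gestures are ignored while dormant
        self.armed = False  # consume the armed state either way
        return gesture
```

With this gate, everyday gestures pass by harmlessly; only the deliberate pair (activation gesture, command gesture) triggers a process.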