Video Documentation: Radio with Assistant

Video Documentation: Radio with Assistant from Jones Merc on Vimeo.

Although I removed the assistant from my final project outcome, I still wanted to document it properly. In the video above, you can see one possible user flow when interacting with the radio. The assistant helps you find new gestures: it checks the user's progress and brings up hints about features or gestures that are still hidden. Of course, this was an idealised sequence, and few users would have found the «OK gesture» immediately. But even in those cases, the assistant would respond accordingly.

Prototyping Object Selection

Video Documentation: Prototyping Object Selection from Jones Merc on Vimeo.

Yesterday, the idea of using only one Leap Motion came up in a discussion. Such a setup has some advantages, but also some downsides.
One advantage is that the technological overhead behind three Leaps (three computers, sending sound and text commands back and forth between them via shiftr) would decrease considerably. Another big advantage is that I could display simple visual feedback to the user about the state and tracking of their hand. This could help communicate that even slight finger movements are tracked, not only big arm movements.
(With three Leaps and three computers, of which only one is attached to a beamer, it would be practically impossible to display the finger movements in real time, because all the tracking information would have to be sent to the «beamer computer» and interpreted there. With only one Leap, I could display the visual feedback all the time.)

One big disadvantage would be that only one object can be controlled at a time. Before manipulating the lamp's light, the lamp has to be «selected» somehow. While discussing this, the best solution seemed to be pointing at the object. This pointing/selecting would only be possible above a certain height: the hand has to be far enough from the Leap device. Lowering the hand «dives into» the object and allows controlling only that one.
Unfortunately, some gestures could end up at the borderline of the selection area: when changing the volume of a song, the z-axis position of the hand represents the volume. But if one turns the volume up a lot, the hand suddenly enters the «object selection height» and the selected object switches automatically.
This behaviour can be seen clearly in the second part of the video above.
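One way to soften this borderline problem would be a hysteresis gap between the control zone and the selection zone: the hand has to rise clearly above one threshold to start selecting, and drop clearly below a lower one to stop. A minimal sketch in Python; the thresholds, and the idea of feeding in a hand height in millimetres, are my assumptions, not part of the actual setup:

```python
# Sketch: splitting the space above the sensor into a control zone and a
# selection zone, with a hysteresis gap so a gesture near the border
# (e.g. turning the volume up) does not flip the mode accidentally.
# All thresholds are hypothetical.

SELECT_ENTER = 300  # mm: hand must rise above this to enter selection mode
SELECT_EXIT = 250   # mm: hand must drop below this to leave selection mode

class ZoneTracker:
    def __init__(self):
        self.selecting = False

    def update(self, hand_height_mm):
        """Return the current mode ("select" or "control") for a hand height."""
        if self.selecting:
            if hand_height_mm < SELECT_EXIT:
                self.selecting = False
        else:
            if hand_height_mm > SELECT_ENTER:
                self.selecting = True
        return "select" if self.selecting else "control"

tracker = ZoneTracker()
print(tracker.update(200))  # control
print(tracker.update(280))  # still control: inside the hysteresis gap
print(tracker.update(320))  # select
print(tracker.update(270))  # still select until dropping below 250
```

With the gap between 250 and 300 mm, a volume gesture that drifts slightly upwards stays in control mode instead of accidentally switching objects.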

Apart from that, the video shows that object selection could be doable: by moving the hand in the object's direction, the object is selected.
In a further elaboration of this idea, one could imagine that selecting an object would project a mapped border around it (see image below).

berg london light mapping
(Berg London, 2012, https://vimeo.com/23983874)
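The «move the hand towards the object» selection from the video could be reduced to comparing the direction of the hand's movement with the direction of each object, and picking the best match. A sketch under assumptions: the object names, their 2D positions relative to the sensor, and the minimum-movement threshold are all made up for illustration:

```python
# Sketch: select the object whose direction best matches the hand's
# movement vector. Positions are hypothetical (x, z) coordinates on the
# table plane, relative to the Leap sensor.
import math

OBJECTS = {"radio": (-200.0, 50.0), "lamp": (220.0, 80.0)}

def select_object(hand_vector, min_distance=30.0):
    """Pick the object whose direction is closest to the hand's movement.

    Returns None if the hand barely moved, so resting hands select nothing.
    """
    vx, vz = hand_vector
    length = math.hypot(vx, vz)
    if length < min_distance:
        return None
    best, best_cos = None, -1.0
    for name, (ox, oz) in OBJECTS.items():
        obj_len = math.hypot(ox, oz)
        # Cosine similarity between movement direction and object direction.
        cos = (vx * ox + vz * oz) / (length * obj_len)
        if cos > best_cos:
            best, best_cos = name, cos
    return best

print(select_object((150.0, 40.0)))   # lamp
print(select_object((-120.0, 20.0)))  # radio
print(select_object((5.0, 3.0)))      # None: movement too small
```

Using cosine similarity rather than nearest position means the gesture works the same regardless of how far the hand actually travels towards the object.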

What’s your Gesture?

Video Research: What’s your Gesture from Jones Merc on Vimeo.

To see if gesture trends can be found, I asked multiple people to perform a gesture for each of eleven terms. They were asked to use only one hand and had no time to think about the gesture beforehand. In some cases, the results were quite predictable and most of the gestures were alike. Other terms provoked a high diversity of gestures, some of them very creative.

Another finding was that the interface controlled by such gestures directly influences the gestures themselves. Many people asked what the object to control would look like and said they might have come up with different gestures if they had seen the object or the interface.

To see the full-length gesture video, go here.

Classification of Gestures

classification of gestures

Interesting classification of gestures in subcategories:

  • pointing – used to point at an object or indicate a direction.
  • semaphoric – a group comprising gesture postures and gesture dynamics that convey specific meanings. Example: the swipe gesture.
  • iconic – used to demonstrate the shape, size, or curvature of objects or entities.
  • pantomimic – used to imitate the performance of a specific task or activity without any tools or objects.
  • manipulation – used to control the position, rotation, and scale of an object or entity in space.

Source: Roland Aigner, Daniel Wigdor, Hrvoje Benko, Michael Haller, David Lindbauer, Alexandra Ion, Shengdong Zhao, and Jeffrey Tzu Kwan Valino Koh. Understanding Mid-Air Hand Gestures: A Study of Human Preferences in Usage of Gesture Types for HCI. Technical report, November 2012.

Standardised vs. Diversified Interaction

Diversificated-Interaction-Process_normal

Diversificated-Interaction-Process_diversified

Highly standardised multi-functional devices like smartphones or computers sometimes require quite tedious sequences of actions to finally reach the action/control you want to execute. Because the device is multifunctional, touch buttons (in the case of a smartphone) need to tell it which application and which action should be executed, narrowing down the options step by step.

If such devices could also be controlled in a diversified way (in terms of interaction), for example via micro-gestures, which offer far more possibilities, one could skip many of these steps. One specific gesture could mean: go to app A, start feature B, and choose option C.
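The «one gesture, many steps» idea is essentially a macro table: a recognised gesture expands into the chain of actions the user would otherwise tap through one by one. A minimal sketch; the gesture names and step strings are hypothetical placeholders:

```python
# Sketch: a gesture expands into a whole chain of navigation steps.
# Gesture names and the step lists are made up for illustration.

GESTURE_MACROS = {
    "pinch-twist": ["go to app A", "start feature B", "choose option C"],
    "two-finger-circle": ["open music app", "resume last playlist"],
}

def expand_gesture(gesture):
    """Return the list of steps bound to a gesture, or an empty list."""
    return GESTURE_MACROS.get(gesture, [])

for step in expand_gesture("pinch-twist"):
    print(step)
```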

Of course, further questions arise within that use case, for example what happens when a gesture is too close to an everyday gesture and might trigger a process unintentionally.
A preceding gesture could solve that problem: just as saying «OK Google» initiates voice control in Google services, a very specific and unique gesture could start gesture recognition.
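Such a wake gesture can be modelled as a simple gate in front of the recogniser: everything is ignored until the wake gesture is seen, and only then are the following gestures let through. A sketch under assumptions; the wake-gesture name and the number of accepted follow-up gestures are invented for illustration:

```python
# Sketch: gesture recognition stays dormant until a unique «wake» gesture
# is seen, analogous to saying «OK Google» before a voice command.
# The gesture names and the ACTIVE_FRAMES count are hypothetical.

WAKE_GESTURE = "double-finger-snap"
ACTIVE_FRAMES = 3  # how many gestures are accepted after waking

class GestureGate:
    def __init__(self):
        self.remaining = 0

    def feed(self, gesture):
        """Return the gesture if the gate is open, otherwise None."""
        if gesture == WAKE_GESTURE:
            self.remaining = ACTIVE_FRAMES
            return None  # the wake gesture itself triggers nothing
        if self.remaining > 0:
            self.remaining -= 1
            return gesture
        return None  # everyday gestures are ignored while dormant

gate = GestureGate()
print(gate.feed("swipe"))               # None: gate still closed
print(gate.feed("double-finger-snap"))  # None: wake gesture itself
print(gate.feed("swipe"))               # swipe: gate is now open
```

Limiting the open window to a few gestures (or, equivalently, a timeout) makes sure the system goes dormant again instead of reacting to everyday hand movements forever.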