Prototyping Object Selection

Video Documentation: Prototyping Object Selection from Jones Merc on Vimeo.

Yesterday, the idea of using only one Leap Motion came up in a discussion. Such a setup has some advantages but also some downsides.
One plus is that the technological mess behind three Leaps (three computers, sending sound and text commands back and forth between them via shiftr) would decrease considerably. Another big advantage is that I would be able to display simple visual feedback about the state and the tracking of the user's hand. This could help communicate that even slight finger movements are tracked, not only big arm movements.
(With three Leaps and three computers, of which only one is attached to a beamer, it would be practically impossible to display the finger movement in real time, because all the tracking information would have to be sent to the «beamer computer» and interpreted there. With only one Leap, I could display the visual feedback all the time.)

One big disadvantage would be that only one object can be controlled at a time. Before manipulating the light of the lamp, the lamp has to be «selected» somehow. While discussing this matter, the best solution seemed to be to point at the object. This pointing/selecting would only be possible at a certain height, so the hand has to have enough distance from the Leap device. Lowering the hand will then «dive into» this object and allow the user to control only that one.
Unfortunately, some gestures can end up at the borderline of the selection area: when changing the volume of a song, the z-axis position of the hand represents the volume. But if one turns the volume up far enough, the hand suddenly enters the «object selection height» and automatically switches the object.
This behaviour is clearly visible in the second part of the video above.
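The height-based mode switch, and the borderline problem it causes, can be sketched in a few lines. This is a minimal illustration, not the actual prototype code; the threshold value and the palm-height coordinate are assumptions (Leap-style tracking reports the palm height above the device in millimetres).

```python
# Hypothetical threshold: hands above this height are in selection mode.
SELECTION_HEIGHT_MM = 250.0

def interaction_mode(palm_y_mm):
    """Return the mode a hand at this height falls into."""
    if palm_y_mm >= SELECTION_HEIGHT_MM:
        return "select-object"
    return "control-object"

# The borderline problem from the video: raising the hand to turn the
# volume far up pushes it into the selection zone and switches objects.
```

A hand at 120 mm stays in control mode, but pushing it up to 260 mm (e.g. while maximising the volume) unintentionally lands in `select-object`.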

Otherwise, the video proves that object selection is feasible: by moving the hand toward the object, the object is selected.
In a further elaboration of this idea, one could imagine that selecting an object would project a mapped border around it (see image below).

berg london light mapping
(Berg London, 2012, https://vimeo.com/23983874)
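One way to implement «moving the hand toward the object» is to pick the object whose direction best matches the hand's movement or pointing direction. The sketch below is an assumption about how this could work, not the prototype's actual logic; the object positions and the 2D table-plane coordinates are made up for illustration.

```python
import math

# Hypothetical object positions relative to the Leap (x, z on the table).
OBJECTS = {"lamp": (-200.0, -100.0), "radio": (250.0, -80.0)}

def select_by_pointing(hand_dir):
    """Pick the object whose direction from the Leap best matches the
    hand's pointing direction (largest cosine similarity)."""
    def cosine(a, b):
        dot = a[0] * b[0] + a[1] * b[1]
        return dot / (math.hypot(*a) * math.hypot(*b))
    return max(OBJECTS, key=lambda name: cosine(hand_dir, OBJECTS[name]))
```

Pointing left and slightly forward would select the lamp; pointing right would select the radio.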

Another Set of User Testing

user_test_radio

Another user test with the music player was conducted, this time with well-known songs, mostly based on guitar sounds, to avoid interference with the user interface sounds. The sounds weren't confusing anymore, and all the functions were eventually found by the user (with the help of the assistant). Several new findings were noted; the most important one probably concerns the volume-changing gesture. A discussion after the testing led to new approaches.

volume change via rotary knob

Until now I had implemented a gesture where the user had to imitate grabbing a rotary knob; by rotating the hand to the right or to the left, he could adjust the volume.
Problem: Tests showed that this gesture is really difficult to perform (on the one hand because it is not really ergonomic, on the other hand because it is a very distinct hand position, which is difficult for the Leap Motion to track).

Together with users, new gesture propositions were considered. Here are some of the best approaches.

volume adjust - open hand z-axis

As with dimming the lamp, one could simply use the distance from the Leap to adjust the volume: the higher the hand, the louder the song. According to user feedback, this is also well understood, because a conductor in an orchestra indicates an increase or decrease in volume by raising or lowering his hand. I have already added this control to the prototype.
Problem: The volume is adjusted all the time. If you are about to change tracks (swiping) or are performing gestures to communicate with the assistant, you are still adjusting the volume. So if you swipe right and raise your hand a bit during that gesture, you will increase the volume a bit. This unwanted behaviour could be prevented with a pre-gesture: assuming one has to hold the open hand in the same position for 2 s, and only after that is able to adjust the volume, a lot of unwanted behaviour could already be excluded. Tests will tell whether this is true.


Edited:
After having programmed half of the gesture described above, I realized that one also has to «escape» it again. Waiting 2 s and adjusting the volume thereafter works fine, but what if I'm content with the volume and want to escape the volume-changing mode? First solutions that came to mind:

  • Performing any other gesture will escape
  • Bending any finger will stop it (to make sure that the Leap Motion is not falsely tracking one finger, I may check whether two fingers are bent)
  • Taking the hand out of the interaction box (field where hand is tracked) will escape
  • Moving the hand sideways more than a certain amount will escape as well

All of these ideas together may solve the problem of escaping the volume mode for a large fraction of the users. But again, testing will show whether that's the case.
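Combining the four escape conditions above could look like the following sketch. The hand is represented as a plain dict here for illustration; the field names and both thresholds are assumptions, not the Leap SDK's data model.

```python
SIDEWAYS_LIMIT_MM = 150.0  # max sideways drift before escaping
MIN_BENT_FINGERS = 2       # two bent fingers, to guard against tracking noise

def should_escape(hand, start_x):
    """Check the four escape conditions against one tracking frame.

    hand: dict with 'other_gesture' (bool), 'bent_fingers' (int),
          'in_box' (bool), 'x' (sideways position in mm).
    start_x: sideways position where the volume mode was entered.
    """
    if hand["other_gesture"]:          # any other recognized gesture
        return True
    if hand["bent_fingers"] >= MIN_BENT_FINGERS:
        return True
    if not hand["in_box"]:             # hand left the interaction box
        return True
    if abs(hand["x"] - start_x) > SIDEWAYS_LIMIT_MM:
        return True                    # moved sideways too far
    return False
```

Each condition alone triggers the escape, so whichever of the four a user instinctively tries should work.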


volume adjust hand tilting

Another approach, discussed with test participants, is a gesture where one tilts the open hand either to the right or to the left. As long as the hand is tilted to the right, the volume is incrementally turned up; returning the hand to a horizontal (neutral) position does not influence the volume, and tilting to the left turns the music down. But some weaknesses were detected here as well.
Problem: Again, when performing other gestures, it is very likely that a tilted hand is detected; when swiping left or right, the hand is often held in a vertical position. Requiring a certain hold time to enter the volume-change mode could again make the gesture less prone to interference, but if one has to do that anyway, I assume the high/low gesture is more intuitive.
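The incremental tilt behaviour could be sketched as a per-frame update with a dead zone around the neutral position. The roll angle convention, the dead-zone size, and the step size are all assumptions for illustration.

```python
TILT_DEADZONE_DEG = 15.0  # a roughly flat hand does nothing
STEP_PER_FRAME = 0.01     # volume change per tracked frame while tilted

def step_volume(volume, roll_deg):
    """One frame of the tilt gesture.
    roll_deg > 0 = tilted right (louder), < 0 = tilted left (quieter)."""
    if roll_deg > TILT_DEADZONE_DEG:
        volume += STEP_PER_FRAME
    elif roll_deg < -TILT_DEADZONE_DEG:
        volume -= STEP_PER_FRAME
    return max(0.0, min(1.0, volume))
```

The dead zone is what makes the neutral position safe; without it, the tracking jitter of a flat hand would constantly nudge the volume.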

volume_fist_z_axis

Furthermore, I thought about a certain hand position that has to be performed to adjust the volume. For example, making a fist would trigger the volume-adjustment mode; the higher the fist, the louder the music, and vice versa.
Problem: Unfortunately, making a fist is also responsible for pausing a track. So whenever you want to adjust the volume, you pause the current track. This idea is therefore not practical.

Disassembling the ventilator

disassembling ventilator object hacking

On the one hand I need to change the look of the ventilator a bit, and on the other hand I need to hide the button, as the ventilator will be switched on and off via gestures rather than buttons. So I had to disassemble it.