User Tests with Radio Prototype

user test radio prototype teaser image

And at some point he (the programmed assistant) asks questions like «Did you know that other gestures exist?» That's where I would like to answer, but the machine doesn't expect any answer. … That's also confusing.

During the first complete user test with the music player, a lot of interesting feedback came together. It starts with things that seem quite easy to resolve, like the point above (the solution would simply be not to ask any questions the user can't answer via gesture), continues with other simple statements («The music was too loud»), and ends with more complex issues such as whether and how much visual feedback is required to create a pleasant user experience.

At the moment visual feedback is non-existent; it is substituted by acoustic feedback. Sounds for swiping, changing the volume and switching the player on and off are provided. Still, they are much more abstract, because the user first has to link a sound to a gesture or to an action respectively. Paired with the faulty behaviour of the Leap Motion tracking device, this leads to a lot of frustration. Some of it can perhaps be alleviated by redesigning the assistant and its hints (maybe even with warnings that the tracking is not 100% accurate).

Further user testing will give more insight into whether and how much the assistant should intervene.
A deeper analysis of the video recordings taken during the test will also help improve the user experience.

Further notes:

  • Text display sometimes too fast
  • Sounds not distinguishable from music
  • Swipe is the clearest sound
  • Not clear why something was triggered
  • Inaccuracy (maybe the lighting situation was not ideal for the Leap's tracking)
  • The assistant mostly taught the gestures correctly; sometimes they would not trigger due to technical constraints
  • The on/off gesture was not discovered by chance (in contrast to the lamp, where almost all users found the exact same gesture to switch it on or off)

Leap Motion Review

Leap Motion gesture tracking device product shot

After having worked with Leap Motion for almost 3 months now, I can say that it indeed does some awesome work tracking hands and fingers. The technology seems to have advanced so far that such products may be used in commercial applications very soon. I also have to state, though, that when programming more complex gestures than the ones which come ready-made with the Leap SDK (swipes, air-taps and circles), it gets very difficult to track them accurately. On the one hand this is because gestures naturally interfere: forming the hand into a «thumbs-up» gesture, for example, will always resemble clenching the hand into a fist.

On the other hand, the Leap sensor sometimes misinterprets the position of fingers (e.g. the index finger is extended, but the Leap Motion says otherwise), which makes it even more difficult to get more or less reliable tracking.
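To illustrate the interference problem: a «thumbs-up» check has to combine several signals to be told apart from a fist. The following is only a minimal sketch, assuming the Leap SDK v2 Python bindings; the 0.8 grab-strength threshold is purely illustrative and not a value from my prototype.

```python
# Minimal sketch, assuming the Leap SDK v2 Python bindings.
# The 0.8 threshold is illustrative, not a value from the prototype.
import Leap

def looks_like_thumbs_up(hand):
    # A thumbs-up differs from a fist only by the extended thumb,
    # so extension flags and grab strength are combined.
    thumb_extended = False
    other_fingers_extended = 0
    for finger in hand.fingers:
        if finger.type == Leap.Finger.TYPE_THUMB:
            thumb_extended = finger.is_extended
        elif finger.is_extended:
            other_fingers_extended += 1
    # grab_strength approaches 1.0 the more the hand is clenched into a fist
    return thumb_extended and other_fingers_extended == 0 and hand.grab_strength > 0.8

controller = Leap.Controller()
# In a real app this would run inside a Leap.Listener callback,
# where frames are guaranteed to be valid.
frame = controller.frame()
if not frame.hands.is_empty:
    print("thumbs up? %s" % looks_like_thumbs_up(frame.hands[0]))
```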

But wouldn’t it be boring if everything ran perfectly smoothly?

3 Leap/3 Computer Setup

3-Leap-Setup

Because I can only use one Leap per computer, I need to set up a rather complex linkage. I will use an iMac and two MacBooks and attach one Leap Motion to each of them. It gets even more complex because different sounds need to be played. The iMac will at the same time show the concept video, so the sound of the video will be played via headphones. One MacBook will process the Leap input associated with the music player (radio), so the headphones attached to that MacBook will play the music itself.
This leaves one remaining MacBook to play the interface sounds via a speaker.
To play the interface sounds triggered by the Leaps connected to the other two computers, I will probably use shiftr to send play commands to the one computer that plays back the sounds.
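Since shiftr is essentially an MQTT broker, the play commands could look roughly like this. This is only a sketch using the paho-mqtt client; the broker address, credentials and topic name are placeholders, not the ones actually used in the setup.

```python
# Sketch only: shiftr speaks MQTT, so one way to send play commands is paho-mqtt.
# Broker address, credentials and topic are placeholders.
import paho.mqtt.client as mqtt

BROKER = "broker.shiftr.io"          # placeholder shiftr instance
TOPIC = "radio/interface-sounds"     # placeholder topic

def send_play_command(sound_name):
    # Runs on a MacBook that reads a Leap and wants a sound played elsewhere.
    sender = mqtt.Client()
    sender.username_pw_set("namespace-key", "namespace-secret")  # placeholder credentials
    sender.connect(BROKER, 1883)
    sender.publish(TOPIC, sound_name)  # e.g. "swipe", "volume", "on-off"
    sender.disconnect()

def on_message(client, userdata, message):
    # Runs on the one computer with the speaker: play the requested sound here.
    print("play interface sound: %s" % message.payload.decode())

receiver = mqtt.Client()
receiver.username_pw_set("namespace-key", "namespace-secret")
receiver.on_message = on_message
receiver.connect(BROKER, 1883)
receiver.subscribe(TOPIC)
receiver.loop_forever()
```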

Finger Tip Recording

logging_finger_position

Before defining rules for detecting a new gesture, I often need to carefully observe the data recorded by the Leap Motion sensor. Here I’m trying to find out what it takes to define the well-known «OK» gesture. I especially need to examine the fingertip positions of the thumb and the index finger.
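A rough logging sketch, again assuming the Leap SDK v2 Python bindings: it prints the fingertip positions of thumb and index finger and their distance, which should drop towards zero when the tips touch for the «OK» gesture. The 25 mm threshold is just a guess for illustration.

```python
# Logging sketch (Leap SDK v2 Python bindings assumed); 25 mm threshold is a guess.
import Leap
import time

controller = Leap.Controller()

while True:
    frame = controller.frame()
    if not frame.hands.is_empty:
        hand = frame.hands[0]
        thumb = index = None
        for finger in hand.fingers:
            if finger.type == Leap.Finger.TYPE_THUMB:
                thumb = finger
            elif finger.type == Leap.Finger.TYPE_INDEX:
                index = finger
        if thumb is not None and index is not None:
            # tip_position is a Leap.Vector in millimetres
            gap = thumb.tip_position.distance_to(index.tip_position)
            print("thumb %s  index %s  gap %.1f mm"
                  % (thumb.tip_position, index.tip_position, gap))
            if gap < 25:
                print("-> looks like the OK gesture")
    time.sleep(0.1)
```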

Testing a dialogue between object and user

Video Documentation: Object – Wizard of Oz from Jones Merc on Vimeo.

A nonfunctional prototype (object) in a «Wizard of Oz» test setup. A nearby computer allows me to write text that is subsequently displayed on the object’s screen. Without having to program a smart object with fully functioning gesture recognition, one can test different scenarios, such as this dialogue between user and object. The focus of the dialogue is how to slowly establish a gesture language without presenting it to the user up front, but rather developing it in a dialogue between the two.
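The post doesn’t describe how the typed text actually reaches the object’s screen; one plausible wiring, reusing the shiftr/MQTT idea from the 3-Leap setup above, could look like this. Broker, credentials and topic are again placeholders.

```python
# Hypothetical "wizard" side: every typed line is pushed to the object, which displays it.
# Broker, credentials and topic are placeholders, not the project's actual wiring.
import paho.mqtt.client as mqtt

wizard = mqtt.Client()
wizard.username_pw_set("namespace-key", "namespace-secret")  # placeholder credentials
wizard.connect("broker.shiftr.io", 1883)
wizard.loop_start()

while True:
    line = input("object says> ")
    wizard.publish("object/display", line)
```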