Video Documentation: User Tests with Music Player

Another set of user tests was conducted; this time the music player and the helping assistant were tested. Again, a lot of valuable feedback came together. To name a few points:

  • The sound design still needs some tweaking, because the sounds were sometimes not interpreted as interface sounds but rather as part of the music. A clearer distinction would be desirable.
  • The music selection needs to be reconsidered. Electronic music is a bad choice for demonstrations because of its similarity to the interface sounds.
  • For some users the assistant should perhaps give more feedback, maybe even visual feedback.
  • The gesture recognition/tracking should be improved for a better experience. The lack of reliability, especially under poor lighting conditions, can be very annoying.

User Tests with Radio Prototype


And at some point he (the programmed assistant) asks questions like «Did you know that other gestures exist?» That’s where I would like to answer, but no answer is expected by the machine. … That’s also confusing.

The first complete user test with the music player yielded a lot of interesting feedback. It ranged from issues that seem quite easy to resolve, like the point quoted above (the solution would be not to ask any questions if the user cannot answer via gesture), over other simple statements («The music was too loud»), to more complex questions such as whether, and how much, visual feedback is required to create a pleasant user experience.

At the moment visual feedback is non-existent; it is substituted by acoustic feedback. Sounds for swiping, changing the volume and switching the player on and off are provided. Still, they are much more abstract, because the user first has to link each sound to a gesture or an action. Paired with the faulty behaviour of the Leap Motion tracking device, this leads to a lot of frustration. Some of that frustration could perhaps be reduced by redesigning the assistant and its hints (maybe even adding warnings that the tracking is not 100% accurate).
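To make the sound mapping concrete, here is a minimal sketch of how gestures could be tied to interface sounds, assuming the Leap SDK v2 Python bindings; play_sound() is a hypothetical stand-in for whatever audio backend the prototype actually uses.

```python
import sys
import Leap  # Leap Motion SDK v2 Python bindings


def play_sound(name):
    """Hypothetical stand-in for the prototype's audio backend."""
    print("playing interface sound: " + name)


class FeedbackListener(Leap.Listener):
    def on_connect(self, controller):
        # Enable only the gestures the prototype actually uses.
        controller.enable_gesture(Leap.Gesture.TYPE_SWIPE)
        controller.enable_gesture(Leap.Gesture.TYPE_CIRCLE)

    def on_frame(self, controller):
        for gesture in controller.frame().gestures():
            # Trigger the sound once, when a gesture completes, so the
            # feedback maps to exactly one action.
            if gesture.state != Leap.Gesture.STATE_STOP:
                continue
            if gesture.type == Leap.Gesture.TYPE_SWIPE:
                play_sound("swipe")   # e.g. skip to the next track
            elif gesture.type == Leap.Gesture.TYPE_CIRCLE:
                play_sound("volume")  # e.g. volume change


if __name__ == "__main__":
    listener = FeedbackListener()
    controller = Leap.Controller()
    controller.add_listener(listener)
    sys.stdin.readline()  # keep listening until Enter is pressed
    controller.remove_listener(listener)
```

Playing a sound only when a gesture reaches its stop state ties each sound to exactly one completed action, which should make the link between gesture and sound easier to learn.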

Further user testing will give more insight into whether and how much the assistant should intervene.
A deeper analysis of the video recordings taken during the test will also help improve the user experience.

Further notes:

  • Text display sometimes too fast
  • Sounds not distinguishable from music
  • Swipe is the clearest sound
  • Not clear why something was triggered
  • Inaccuracy (maybe the lighting conditions were not ideal for the Leap's tracking)
  • The assistant mostly taught the gestures correctly; sometimes they would not trigger due to technical constraints
  • The on/off gesture was not found by chance (in contrast to the lamp, where almost all users found the exact same gesture to switch it on or off)

Video Documentation: User Tests

Video Documentation: User Tests from Jones Merc on Vimeo.

User tests showed that users direct their gestures mostly towards the object they want to control. A direct dialogue between the user and the object is evident.
The tests also offered some interesting insights into how users interact with the gesture lamp. All of them found one of the on/off gestures pretty quickly, but were puzzled when they came across the other, which confused rather than helped them.
Another interesting remark was that when a user is controlling sound, the source is less evident than a light source (lamp) or a wind source (fan). Sound/music rather surrounds us, and a gesture may therefore not be directed as clearly at the object (the music player) itself.
The tests certainly make me rethink some interaction steps and help to develop a smoother interaction flow.

Testing a dialogue between object and user

Video Documentation: Object – Wizard of Oz from Jones Merc on Vimeo.

A non-functional prototype (object) in a «Wizard of Oz» test setup. A nearby computer is used to write text, which is subsequently displayed on the object's screen. Without having to program a smart object with fully functioning gesture recognition, one is able to test different scenarios, like this dialogue between user and object. The focus of the dialogue is how to slowly establish a gesture language: not presenting it to the user up front, but rather developing it in a dialogue between the two.
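Such a setup needs very little technology. As a rough illustration (not the project's actual code), the sketch below relays typed lines from the operator's laptop to the prototype's screen over a plain TCP socket; the host name and port are made up.

```python
import socket

OBJECT_ADDRESS = ("prototype.local", 5005)  # hypothetical host and port


def run_object_display():
    """Runs on the object: show each line of text the hidden wizard sends."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("", OBJECT_ADDRESS[1]))
    server.listen(1)
    conn, _ = server.accept()
    for line in conn.makefile("r"):
        # Stand-in for the real screen: the prototype would render this
        # on its built-in display instead of printing it.
        print("DISPLAY: " + line.strip())


def run_wizard_console():
    """Runs on the operator's laptop: forward typed lines to the object."""
    conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    conn.connect(OBJECT_ADDRESS)
    while True:
        line = input("wizard> ")
        conn.sendall((line + "\n").encode("utf-8"))
```

Because the «intelligence» is just a person typing, the dialogue script can be changed between (or even during) test sessions without touching any recognition code.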