And at some point he (the programmed assistant) asks questions like «Did you know that other gestures exist?» That's where I would like to answer, but the machine doesn't expect any answer. … That's also confusing.
During the first complete user test with the music player a lot of interesting feedback came together. It starts with things that seem quite easy to resolve, like the point above (the solution would be not to ask any questions if the user can't answer via gesture), goes on to other simple statements («The music was too loud»), and ends with more complex things, like the question of whether and how much visual feedback is required for a pleasant user experience.
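To make that rule concrete, here is a minimal sketch; the prompt structure and gesture names are assumptions, the prototype may organise its dialogue completely differently:

```python
# Hypothetical sketch of the rule above: only ask questions the user
# can answer with a gesture. None of these names are from the prototype.

# Gestures the music player understands (assumed set).
KNOWN_GESTURES = {"swipe_left", "swipe_right", "circle", "screen_tap"}

def may_ask(prompt_text, expected_gestures):
    """Allow a question only if at least one known gesture answers it."""
    return bool(set(expected_gestures) & KNOWN_GESTURES)

# «Did you know that other gestures exist?» expects no gesture as an
# answer, so the assistant would state the fact instead of asking:
if not may_ask("Did you know that other gestures exist?", []):
    print("There are more gestures to discover.")  # statement, not question
```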
At the moment visual feedback is non-existent; it is substituted by acoustic feedback. Sounds for swiping, changing the volume and switching the player on and off are provided. Still, they are much more abstract, because the user first has to link each sound to a gesture or an action. Paired with the faulty behaviour of the Leap Motion tracking device, this leads to a lot of frustration. Some of it could perhaps be relieved by redesigning the assistant and its hints (maybe even with warnings that the tracking is not 100% accurate).
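As an illustration of that sound-to-gesture mapping, a rough sketch using the Leap Motion Python SDK (v2) could look like the following; play_sound is a stand-in for whatever audio layer the player actually uses, and the sound file names are made up. Firing a sound only on a *completed* gesture (STATE_STOP) might also take some frustration out of the inaccurate tracking, since half-recognised gestures stay silent:

```python
import sys
import Leap  # Leap Motion Python SDK (v2)

def play_sound(name):
    """Stand-in for the player's real audio output (hypothetical)."""
    print("playing", name)

class FeedbackListener(Leap.Listener):
    def on_connect(self, controller):
        # Gesture recognition has to be enabled explicitly.
        controller.enable_gesture(Leap.Gesture.TYPE_SWIPE)
        controller.enable_gesture(Leap.Gesture.TYPE_CIRCLE)

    def on_frame(self, controller):
        for gesture in controller.frame().gestures():
            # Only react to completed gestures, so jittery tracking
            # does not trigger feedback on every half-recognised frame.
            if gesture.state != Leap.Gesture.STATE_STOP:
                continue
            if gesture.type == Leap.Gesture.TYPE_SWIPE:
                play_sound("swipe.wav")   # e.g. track change
            elif gesture.type == Leap.Gesture.TYPE_CIRCLE:
                play_sound("volume.wav")  # e.g. volume change

listener = FeedbackListener()
controller = Leap.Controller()
controller.add_listener(listener)

print("Press Enter to quit...")
sys.stdin.readline()
controller.remove_listener(listener)
```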
Further user testing will give more insight into whether and how much the assistant should intervene.
A deeper analysis of the video recordings taken during the test will also help to improve the user experience.
Further notes:
- Text display sometimes too fast
- Sounds not distinguishable from music
- Swipe is the clearest sound
- Not clear why something was triggered
- Inaccuracy (maybe the lighting situation was not ideal for the Leap's tracking)
- The assistant mostly taught the gestures correctly; sometimes they would not trigger due to technical constraints
- The on/off gesture was not found by chance (in contrast to the lamp, where almost all users found the exact same gesture to switch it on or off)