Multi-Object Assistant Layout

Multi-object assistant chat layout

Transparency in the multi-object chat layout

In a setting with three objects and three assistants (a text aid for understanding the gestures) but only one space to display the text, I need a layout that makes clear which object is «currently» speaking.

The first image above depicts such a layout. The question is whether the subtle color differences and the labels («Radio», «Lamp», «Ventilator») are enough to distinguish the different assistants. And what if one would like to read a comment from the lamp that has already left the screen because of two quick messages from the radio?

Maybe the second picture makes it clearer which object is currently active.

In a setup where object selection has to take place (with only one Leap Motion; see Prototyping Object Selection), it would also be conceivable to display only the chat of the currently active object.

An animated transition between the different object chats would be necessary and could look something like this:

multi-object chat switch animation
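To make this filtering idea more concrete, here is a minimal TypeScript sketch of how the chat could be modelled: every message is tagged with its object, so the lamp's older comments stay retrievable even after the radio has pushed them off screen, and the view can be limited to the currently active object. All names are my own placeholders, not code from the prototype.

```typescript
// Minimal sketch of a per-object chat log (hypothetical names, not the prototype code).

type ObjectId = "radio" | "lamp" | "ventilator";

interface ChatMessage {
  object: ObjectId;   // which assistant is speaking
  text: string;       // the text aid shown to the user
  timestamp: number;  // ms since epoch, used for ordering
}

class AssistantChat {
  private messages: ChatMessage[] = [];
  private active: ObjectId = "radio";

  // Every assistant pushes into the same log, tagged with its object.
  post(object: ObjectId, text: string): void {
    this.messages.push({ object, text, timestamp: Date.now() });
  }

  // Switching the active object is what the transition animation would visualise.
  setActive(object: ObjectId): void {
    this.active = object;
  }

  // Shared view: everything interleaved (the first layout above).
  sharedView(): ChatMessage[] {
    return this.messages;
  }

  // Filtered view: only the currently active object's chat, so a lamp comment
  // pushed off screen by two quick radio messages can still be read later.
  activeView(): ChatMessage[] {
    return this.messages.filter((m) => m.object === this.active);
  }
}
```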

Prototyping Object Selection

Video Documentation: Prototyping Object Selection from Jones Merc on Vimeo.

Yesterday, the idea of using only one Leap Motion came up in a discussion. Such a setup has some advantages but also some downsides.
One plus is that the technological mess behind three Leaps (three computers sending sound and text commands back and forth between each other via shiftr) would definitely decrease. Another big advantage is that I would be able to display simple visual feedback to the user about the state and tracking of their hand. This could help communicate that even slight finger movements are tracked, not only big arm movements.
(With three Leaps and three computers, of which only one is attached to a beamer, it would be practically impossible to display the finger movement in real time, because all the tracking information would have to be sent to the «beamer computer» and interpreted there. With only one Leap, I could display the visual feedback all the time.)
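For context, shiftr speaks MQTT, so the command relay between the computers could look roughly like the following sketch (using mqtt.js; the broker URL, credentials and topic names are placeholders, not the ones from my setup). With a single Leap, most of this traffic would disappear, since the tracking data could stay on the beamer computer.

```typescript
import mqtt from "mqtt";

// Rough sketch of the command relay via shiftr (MQTT).
// Broker URL, credentials and topic names are placeholders.
const client = mqtt.connect("mqtt://broker.shiftr.io", {
  username: "my-namespace",
  password: "my-token",
});

client.on("connect", () => {
  // The «beamer computer» listens for sound and text commands from the Leap computers.
  client.subscribe("objects/+/command");
});

client.on("message", (topic, payload) => {
  // e.g. topic "objects/lamp/command", payload '{"type":"text","value":"Light switched on"}'
  console.log(`${topic}: ${payload.toString()}`);
});

// A Leap computer publishes a command for its object.
client.publish(
  "objects/radio/command",
  JSON.stringify({ type: "sound", value: "volume-click" })
);
```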

One big disadvantage would be that only one object can be controlled at a time. Before manipulating the light of the lamp, the lamp has to be «selected» somehow. While discussing this, pointing at the object seemed to be the best way of selecting it. This pointing/selecting would only be possible at a certain height: the hand has to be far enough above the Leap device. Lowering the hand then «dives into» the selected object and allows controlling only that one.
Unfortunately, some gestures can end up at the border of the selection area: when changing the volume of a song, the z-axis position of the hand represents the volume. But if one turns the volume up a lot, the hand suddenly enters the «object selection height» and automatically switches the object.
This behaviour can be seen clearly in the second part of the video above.

Otherwise the video shows that object selection could be doable: by moving the hand in the object's direction, the object is selected.
Elaborating further on this new idea, one could imagine that selecting an object would map a projected border around it (see image below).

Berg London light mapping
(Berg London, 2012, https://vimeo.com/23983874)
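A rough sketch of how this height-gated selection could work (my own simplification with made-up values, not the prototype code): above a certain height, the horizontal position of the hand picks the nearest object; below it, the hand has dived into the selected object and only controls that one. The volume conflict described above is exactly the first branch being entered by accident.

```typescript
// Sketch of height-gated object selection (simplified, hypothetical values).

interface SceneObject {
  id: "radio" | "lamp" | "ventilator";
  x: number; // horizontal position relative to the Leap, in mm
}

const OBJECTS: SceneObject[] = [
  { id: "radio", x: -200 },
  { id: "lamp", x: 0 },
  { id: "ventilator", x: 200 },
];

// Hand height (mm above the Leap) at which selection mode begins.
const SELECTION_HEIGHT = 300;

let selected: SceneObject = OBJECTS[1];

function update(palmX: number, palmHeight: number): void {
  if (palmHeight >= SELECTION_HEIGHT) {
    // Selection zone: pick the object the hand is moved towards,
    // and show the projected border around it.
    selected = OBJECTS.reduce((best, obj) =>
      Math.abs(obj.x - palmX) < Math.abs(best.x - palmX) ? obj : best
    );
  } else {
    // Below the threshold the hand has «dived into» the selected object
    // and only controls that one, e.g. mapping hand height to volume.
    // The conflict: turning the volume far up pushes palmHeight towards
    // SELECTION_HEIGHT and accidentally re-enters selection mode.
  }
}
```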

Sound Design: New Sounds

I redesigned the sounds for the interface. So far, the sounds for swiping, for entering and leaving the interaction box, and the volume click have changed. Two new sounds joined the family: docking on to and off from the volume-adjustment gesture.
All sounds now share common sound fragments, to make it clear that they all belong to the same sound interface.

Increasing Complexity of Interaction Flow

interaction flow with increased complexity

Writing the interaction flow for the music player, which offers gestures for play, pause, track change and volume adjustment, is already a lot more complex than the one I wrote for a lamp with a simple on/off switch.
The «smarter» the device, the more complex the multilinear story flow. Imagining this for a much more elaborate product seems almost crazy, or makes one think that artificial intelligence will definitely be a must if products ever want to appear really «smart».
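To illustrate the jump in complexity, here is a small sketch of both flows as state machines (state and gesture names are my own, not taken from the actual flow document): the lamp collapses to a single toggle, while the music player already needs several states and gesture-dependent transitions.

```typescript
// Sketch comparing the lamp's flow with the music player's (hypothetical names).

// Lamp: two states, one gesture.
type LampState = "off" | "on";
const toggleLamp = (s: LampState): LampState => (s === "off" ? "on" : "off");

// Music player: more states and more gestures already make the flow multilinear.
type PlayerState = "paused" | "playing" | "adjustingVolume";
type Gesture = "play" | "pause" | "swipeTrack" | "dockOnVolume" | "dockOffVolume";

function nextPlayerState(state: PlayerState, gesture: Gesture): PlayerState {
  switch (state) {
    case "paused":
      return gesture === "play" ? "playing" : state;
    case "playing":
      if (gesture === "pause") return "paused";
      if (gesture === "dockOnVolume") return "adjustingVolume";
      return state; // swipeTrack changes the track but not the state
    case "adjustingVolume":
      return gesture === "dockOffVolume" ? "playing" : state;
  }
}
```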

Video Documentation: User Tests

Video Documentation: User Tests from Jones Merc on Vimeo.

User tests showed that users direct their gestures mostly towards the object they want to control. A direct dialogue between the user and the object is evident.
The tests also gave some interesting insights into how users interact with the gesture lamp. All of them found one of the on/off gestures pretty quickly, but were puzzled when they came across the other one, which confused rather than helped them.
Another interesting comment was that when a user is controlling sound, the source is less evident than a light (lamp) or wind (fan) source. Sound/music rather surrounds us, and a gesture may therefore not be directed as clearly towards the object (the music player) itself.
The tests certainly make me rethink some interaction steps and will help me develop a smoother interaction flow.