Video Documentation: Radio with Assistant

(Video: «Radio with Assistant» from Jones Merc on Vimeo)

Although I removed the assistant from my final project outcome, I still wanted to document it properly. In the video above, you can see one possible user flow when interacting with the radio. The assistant helps you discover new gestures: it checks the user's progress and brings up hints about features or gestures that are still hidden. Of course, this was a very tidy sequence, and few users would have found the «OK» gesture immediately. But even in those cases the assistant would answer accordingly.
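As a rough illustration of that hint mechanism, here is a minimal sketch in TypeScript. The gesture names, hint texts, and function names are invented for illustration; they are not the project's actual code.

```typescript
// Hints for gestures the user has not performed yet. All names and
// texts below are placeholders, not the real project content.
const hints: Record<string, string> = {
  swipe: "Try swiping over the radio to change the station.",
  circle: "Draw a circle to adjust the volume.",
  ok: "Form an OK sign to confirm your selection.",
};

const discovered = new Set<string>();

// Called whenever the gesture recognizer reports a performed gesture.
// Returns the next hint to show, if any gestures remain hidden.
function onGesture(gesture: string): string | undefined {
  discovered.add(gesture);
  const next = Object.keys(hints).find((g) => !discovered.has(g));
  return next ? hints[next] : undefined;
}
```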

The Conversational UI

simon stalenhag robot picture
(Image: ©Simon Stålenhag, http://www.simonstalenhag.se/bilderbig/peripheral2_1920.jpg)

I was just made aware of this interesting article about user interfaces – or rather, about the lack of any graphical user interface. The article states that we may soon encounter a lot more robots (or virtual assistants, or invisible apps – whatever you like to call them) in our digital «existence».

The article is mainly about bots literally «speaking» to you, but as the author himself states in the comments section, voice operation takes up almost 100% of our attention, whereas text interfaces can be handled in fragments and impose less cognitive load. And that's exactly what I tried to achieve with my assistant.

Another very important thing is:

… so picking the right thing to say, and the tone of your dialogue with the user, is crucial

And that's exactly what I was struggling with for the last few weeks – and what finally made me drop the assistant completely. In my case there is a little less need for it, because I also have the acoustic feedback layer. Nevertheless, this statement is extremely important if you think about future interactions with bots.

The article also gives some advice on what the first encounter with the assistant should be like.

Your first contact with the user should be to introduce yourself. Remember, you’re in a chat. You only get one or two lines, so keep it short and to the point.

Other aspects were highlighted as well – for example, the difference between a GUI and a robot in terms of discoverability. If you have an icon, you can hover over or click it and you'll soon know what it does. If you are in a conversation with a bot, you have no idea what it can and cannot do.

It’s the robot’s job to seize every opportunity to suggest the next step and highlight less-familiar features.

My assistant did the same: it introduced next steps depending on the user's skills and tried to bring up new and less-familiar features.

This article again confirms to me that my topic could become – or already is – very important when thinking about future interactions with machines, in whatever way that may be: via voice control or with a text interface.

Find the article here.

Further links on the same topic:

  • Twine, a tool for telling interactive, non-linear stories (very similar to my «flows»).
  • Wit, building bots made easy.
  • Beep Boop, hosting platform for bots.
  • BotKit, Building Blocks for Building Bots.

Yellow Lamp Waits to be Painted White

yellow raw lamp soon painted white

Instead of buying another lamp, I decided to stick with the yellow one. It's true that its look may not be as reduced as the radio's, but after looking for an affordable alternative, I must say that finding a very good fit is probably impossible. Designing my own lamp won't be possible due to time constraints, so I hope that painting the lamp in the exact same white as the other objects and removing some smaller parts will make it clear that the three objects belong to the same object family.

One Leap Motion Setup

Technological setup with one Leap Motion

Sketch of how the exhibition setup could look (schematic only) if I use just one Leap Motion. The iMac would process the incoming data from the Leap device and display visual feedback on one projector.

It would also play the interface sounds directly (a USB audio interface would be needed for that, because headphones playing the sound of the concept video will also be attached to the iMac).

For the text assistant and the music, it would send shiftr commands to the MacBook, which plays back the music and at the same time displays the assistants of the three objects on the second projector.
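Since shiftr speaks MQTT, the commands between the two machines could be published with an off-the-shelf client. Below is a minimal sketch using the mqtt package for Node.js; the broker URL, credentials, and topic names are placeholders, not the actual values of my setup.

```typescript
import mqtt from "mqtt";

// Placeholder broker URL and credentials – replace with your own shiftr instance.
const client = mqtt.connect("mqtt://try:try@broker.shiftr.io");

// iMac side: publish an event whenever a gesture is recognized.
function publishGesture(object: "radio" | "lamp" | "ventilator", gesture: string) {
  client.publish(`${object}/gesture`, gesture); // e.g. topic "radio/gesture", payload "volume-up"
}

// MacBook side: subscribe to all objects and react,
// e.g. by starting music playback or updating the assistant chat.
client.subscribe("+/gesture");
client.on("message", (topic, payload) => {
  console.log(`${topic}: ${payload.toString()}`);
});
```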

Multi-Object Assistant Layout

Multi object assistant chat layout

Transparency in Multi object chat layout

In a setting with three objects and three assistants (text aids for understanding the gestures) but only one space to display the text, I need a layout that makes clear which object is «currently» speaking.

The first image above depicts such a layout. The question is whether the subtle color differences and the labels («Radio», «Lamp», «Ventilator») are enough to distinguish the different assistants. And what if you wanted to read a comment from the lamp that has already left the screen because of two quick messages from the radio?

Maybe the second picture makes it clearer which object is currently active.

In a setup where object selection has to take place with only one Leap Motion (see Prototyping Object Selection), it would also be conceivable to display only the chat of the currently active object, as sketched below.
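A minimal sketch of that idea: each message is tagged with its source object, and only the active object's messages are rendered. All names here are illustrative assumptions, not the project's actual code.

```typescript
type ObjectId = "radio" | "lamp" | "ventilator";

interface ChatMessage {
  object: ObjectId;
  text: string;
  time: number;
}

const messages: ChatMessage[] = [];
let activeObject: ObjectId = "radio";

function addMessage(object: ObjectId, text: string) {
  messages.push({ object, text, time: Date.now() });
}

// Only the currently selected object's messages are shown, so a quick
// burst from the radio can no longer push a lamp comment off screen.
function visibleMessages(): ChatMessage[] {
  return messages.filter((m) => m.object === activeObject);
}
```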

An animation between the different object chats would be necessary and might look like this:

multi object chat switch animation
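One way to trigger such a switch – a minimal sketch, assuming the three chats are side-by-side columns inside a wrapper element whose CSS defines something like transition: transform 0.4s ease. The element ID and the object order are assumptions.

```typescript
const order = ["radio", "lamp", "ventilator"];
const wrapper = document.getElementById("chat-wrapper")!; // hypothetical ID

// Slide the wrapper so the selected object's chat column fills the view.
function switchToChat(object: string) {
  const index = order.indexOf(object);
  wrapper.style.transform = `translateX(${-index * 100}%)`;
}
```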