Video Documentation: Radio with Assistant

(Video: «Radio with Assistant» from Jones Merc on Vimeo.)

Although I removed the assistant from my final project outcome, I still wanted to document it properly. In the video above, you can see one possible user flow when interacting with the radio. The assistant helps you find new gestures: it checks the user's progress and brings up hints about features or gestures that are still hidden. Admittedly, this was a very tidy sequence, and few users would have found the «OK gesture» immediately. But even in those cases the assistant would respond accordingly.
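The hint logic described above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation from the project; apart from the «OK gesture», the gesture names and hint texts are placeholders I made up for the example.

```python
# Hypothetical sketch of the assistant's hint logic: track which gestures
# the user has already discovered and suggest the next still-hidden one.

KNOWN_GESTURES = ["swipe", "tap", "ok-gesture", "wave"]  # placeholder names

HINTS = {
    "swipe": "Try swiping to change the station.",
    "tap": "A single tap toggles play and pause.",
    "ok-gesture": "Form an OK sign to confirm your selection.",
    "wave": "Wave your hand to dismiss the assistant.",
}

def next_hint(discovered):
    """Return a hint for the first gesture the user has not found yet."""
    for gesture in KNOWN_GESTURES:
        if gesture not in discovered:
            return HINTS[gesture]
    return "You have discovered every gesture. Well done!"
```

In the video, each time the user performs a new gesture it would be added to `discovered`, and the assistant would surface the next hint at a suitable moment rather than all at once.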

The Conversational UI

(Image: robot illustration ©Simon Stålenhag, http://www.simonstalenhag.se/bilderbig/peripheral2_1920.jpg)

I was just made aware of this interesting article about user interfaces. Or rather, about the lack of any graphical user interface. The article states that we may soon encounter a lot more robots (or virtual assistants, or invisible apps, whatever you like to call them) in our digital «existence».

The article is mainly about bots literally «speaking» to you, but as the author himself states in the comments section, voice operation takes up almost 100% of our attention, whereas text interfaces can be handled in fragments and impose less cognitive load. And that’s exactly what I tried to achieve with my assistant.

Another very important thing is:

… so picking the right thing to say, and the tone of your dialogue with the user, is crucial

And that’s exactly what I was struggling with for the last few weeks, until I finally decided to cut the assistant completely. In my case there is a little less need for it, because I have the acoustic feedback layer as well. Nevertheless, this statement is extremely important if you think about future interactions with bots.

The article also gives some advice on what the first encounter with the assistant should be like.

Your first contact with the user should be to introduce yourself. Remember, you’re in a chat. You only get one or two lines, so keep it short and to the point.

Other aspects were highlighted as well, for example the difference between a GUI and a robot in terms of discoverability. If you have an icon, you can hover over or click it and you’ll soon know what it does. If you are in a conversation with a bot, you have no idea what it can and cannot do.

It’s the robot’s job to seize every opportunity to suggest the next step and highlight less-familiar features.

My assistant did the same. It introduced next steps depending on the user's skills and tried to bring up new and less-familiar features.

This article again proves to me that my topic could become, or already is, very important when thinking about future interactions with machines, whether via voice control or a text interface.

Find the article here.

Further links to the same topic:

  • Twine, a tool for telling interactive, non-linear stories (very similar to my «flows»).
  • Wit, building bots made easy.
  • Beep Boop, hosting platform for bots.
  • BotKit, Building Blocks for Building Bots.