Interaction System: 1-2


1-2 Managing and Entertaining
The output of a self-regulating system becomes input for a learning system. If the output of the learning system also becomes input for the self-regulating system, two cases arise. The first case is managing automatic systems, for example, a person setting the heading of an autopilot—or the speed of a steam engine. The second variation is a computer running an application, which seeks to maintain a relationship with its user. Often the application’s goal is to keep users engaged, for example, increasing difficulty as player skill increases or introducing surprises as activity falls, provoking renewed activity. This type of interaction is entertaining—maintaining the engagement of a learning system. If 1-2 or 2-1 is open loop, the interaction may be seen as essentially the same as the open-loop case of 0-2, which may be reduced to 0-0.

Source: Dubberly, Hugh, Paul Pangaro, and Usman Haque. «What is Interaction? Are There Different Types?». 2009. ACM 1072-5220/09/0100
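The entertaining case described above can be reduced to a toy model. The sketch below is my own illustration, not taken from the paper, and all numbers are arbitrary: a game raises its difficulty as the player's skill grows, so the output of each system keeps feeding the other.

```python
# Toy model of the "entertaining" loop: the player is the learning
# system, the game is the self-regulating system that adapts to keep
# the player engaged. All numbers are arbitrary.

def play_round(skill):
    """Practice improves the player's skill a little each round."""
    return min(1.0, skill + 0.05)

skill, difficulty = 0.0, 0.1
for _ in range(20):
    skill = play_round(skill)
    # The game reacts to the player's learning: once the player
    # clearly outgrows the current difficulty, raise it.
    if skill > difficulty + 0.1:
        difficulty = min(1.0, difficulty + 0.1)
```

If either feedback path is cut, say the game stops reading `skill`, the coupling collapses into the open-loop case mentioned above.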

Testing a dialogue between object and user

Video Documentation: Object – Wizard of Oz from Jones Merc on Vimeo.

A nonfunctional prototype (object) in a «Wizard of Oz» test setup. A nearby computer allows the operator to write text, which is then displayed on the object’s screen. Without having to program a smart object with fully functioning gesture recognition, one can test different scenarios, such as this dialogue between user and object. The focus of the dialogue is on how to establish a gesture language gradually: not by presenting it to the user up front, but by developing it in a dialogue between the two.

Object Dialogue

Object-Wizard

The object above is a prototype of a smart computer that regulates things for the user. The user can interact with it via gestures, and for simplicity the object has a display to «speak».
This object makes it possible to test different scenarios with the «Wizard of Oz» method. The object simply displays text that I type at a nearby computer. I can thereby present the object to a possible user and let him interact with it, controlling the object’s feedback myself.
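The wiring behind such a setup can be minimal. The sketch below is a simplification (my actual setup sends the text to the object's screen; here both ends are just threads in one process), showing the core of the relay: the wizard's typed lines are queued and «displayed» by the object as if it had understood the user's gesture.

```python
import queue
import threading

# Minimal sketch of a Wizard-of-Oz relay: the operator's typed lines
# are queued and "displayed" on the object. In a real setup the
# display end would run on the object, reached over a network.

messages = queue.Queue()

def operator(lines):
    """The hidden wizard types responses at a nearby computer."""
    for line in lines:
        messages.put(line)
    messages.put(None)  # sentinel: session over

def object_display(shown):
    """The object shows whatever the wizard sends."""
    while True:
        line = messages.get()
        if line is None:
            break
        shown.append(line)

shown = []
t = threading.Thread(target=object_display, args=(shown,))
t.start()
operator(["Hello. Wave to begin.", "Nice! Now try a circle gesture."])
t.join()
```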

Teach Leap New Gestures

The framework LeapTrainer.js makes it possible to teach a program new gestures. In the video it appears to work nicely; in reality it does not work that well. Improving the learning mechanism would be crucial for using it in my project.

The corresponding code can be found on GitHub.
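To illustrate the basic idea, here is a minimal sketch of template-based gesture matching. It is my own simplification, not LeapTrainer.js's actual algorithm: record each trained gesture as a 2-D point sequence, resample it to a fixed length, and classify new input by the nearest stored template.

```python
import math

# Sketch of template-based gesture matching (a simplification, not
# LeapTrainer.js's actual algorithm): gestures are 2-D point
# sequences, resampled to a fixed length and compared by average
# point-to-point distance to each trained template.

def path_length(pts):
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def resample(pts, n=16):
    """Resample a polyline to n evenly spaced points."""
    interval = path_length(pts) / (n - 1)
    pts = list(pts)
    out = [pts[0]]
    d = 0.0
    i = 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if seg > 0 and d + seg >= interval:
            # Place a new point partway along this segment.
            t = (interval - d) / seg
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the new point
            d = 0.0
        else:
            d += seg
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

templates = {}

def train(name, pts):
    """Store a resampled example of a named gesture."""
    templates.setdefault(name, []).append(resample(pts))

def recognize(pts):
    """Return the name of the closest trained template."""
    cand = resample(pts)
    best, score = None, float("inf")
    for name, tpls in templates.items():
        for tpl in tpls:
            dist = sum(math.dist(a, b) for a, b in zip(cand, tpl)) / len(cand)
            if dist < score:
                best, score = name, dist
    return best

train("swipe right", [(0, 0), (10, 0)])
train("swipe down", [(0, 0), (0, 10)])
```

A real recognizer would also normalize for position, scale and rotation; the lack of such normalization is one reason naive matching works much worse in practice than in a demo.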

What’s your Gesture?

Video Research: What’s your Gesture from Jones Merc on Vimeo.

To see if gesture trends can be found, I asked multiple people to perform a gesture for each of eleven terms. They were asked to use only one hand and did not have time to think about the gesture beforehand. I found that in some cases the results were quite predictable and most of the gestures were alike. Other terms provoked a high diversity of gestures, some of them very creative.

Another finding was that the interface to be controlled with such gestures would directly influence the gesture itself. Many people asked what the object to be controlled would look like and said they might have come up with different gestures if they’d seen the object or the interface, respectively.
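One way to make «predictable» concrete is to score each term by the share of participants who produced its most common gesture. The labels below are invented for illustration, not my actual recordings:

```python
from collections import Counter

# Hypothetical data: gesture labels given by participants per term.
responses = {
    "louder":  ["raise hand", "raise hand", "raise hand", "cup ear"],
    "shuffle": ["shake", "swirl", "snap", "wave"],
}

def agreement(labels):
    """Share of participants who gave the most common answer."""
    top_count = Counter(labels).most_common(1)[0][1]
    return top_count / len(labels)

scores = {term: agreement(labels) for term, labels in responses.items()}
```

A score near 1.0 means a term has an obvious, shared gesture; a score near 1/n means every participant invented something different.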

To see the full-length gesture video, go here.