Degree Show

Gestio_Exhibition_03

Gestio_Exhibition_04

Gestio_Exhibition

Gestio_Exhibition_02

The final exhibition setup at the ZHdK.

Degree Show Preparation

radio fixation

black exhibition table

leap holes

box container macbooks

A few pictures of the ongoing preparation for the exhibition. The radio needs to be fixed in place so that nobody steals it; a screw underneath will fasten it to the black table shown in the next picture. After laser cutting four MDF boards to provide just enough room for the three Leap Motions, I cut bigger holes into the board for the cables, which should remain hidden. A difficult part was covering the MDF with the black foil without trapping any air underneath.

To hide the MacBooks and prevent theft, I built a small wooden box that will contain the computers and some cables. Of course it had to be painted white, just like the rest of the exhibition.

Shiftr Connections

Above you can see the current shiftr connections: all connected Arduinos, the webpage for remote control and the topics (currently two: lamp and ventilator).

Video Shoot

concept video shoot

Finally after 4 hours of shooting, all shots are recorded. Thanks to my actor Stefan! Another big thanks goes out to my man Dave for the support (and for the shot above).

Ventilator 2.0

ventilator shooting

ventilator on black

Late night shoot of the final ventilator.
After redesigning the stand for the ventilator, I also painted the blades white and exchanged the black cable for a white one. The result is shown above.

Lamp Reborn

white lamp photo shoot

Product Shot white Lamp

Taking some pictures of the white painted and reassembled lamp.

Ventilator Gets a New Stand

ventilator wooden stand

ventilator on wooden stand

To bring the ventilator and the radio closer together in terms of look, I started working on a new stand for the ventilator. It has the same rounded edges as the radio (comparison here) and will soon also be painted white.

Video Documentation: Radio with Assistant

Video Documentation: Radio with Assistant from Jones Merc on Vimeo.

Although I removed the assistant from my final project outcome, I still wanted a proper documentation of it. The video above shows one possible user flow when interacting with the radio. The assistant helps you find new gestures: it checks the user’s progress and brings up hints about still-hidden features or gestures. Of course this was a very neat sequence, and few users would have found the «OK gesture» immediately. But even in those cases the assistant would answer accordingly.

The Conversational UI

simon stalenhag robot picture
(Image: ©Simon Stålenhag, http://www.simonstalenhag.se/bilderbig/peripheral2_1920.jpg)

I was just made aware of this interesting article about user interfaces – or rather, about the lack of any graphical user interface. The article states that we may soon encounter a lot more robots (or virtual assistants or invisible apps, whatever you like to call them) in our digital «existence».

The article is mainly about bots literally «speaking» to you, but as the author himself states in the comments section, voice operation takes up almost 100% of our attention, whereas text interfaces can be handled in fragments and demand less cognitive load. And that’s exactly what I tried to achieve with my assistant.

Another very important thing is:

… so picking the right thing to say, and the tone of your dialogue with the user, is crucial

And that’s exactly what I was struggling with for the last few weeks. In the end I even decided to drop the assistant completely. In my case there is a little less need for it, because I also have the acoustic feedback layer. Nevertheless, this statement is extremely important if you think about future interactions with bots.

The article also gives some advice on what the first encounter with the assistant should be like.

Your first contact with the user should be to introduce yourself. Remember, you’re in a chat. You only get one or two lines, so keep it short and to the point.

Other aspects were highlighted as well, for example the difference between a GUI and a bot in terms of discoverability. If you have an icon, you can hover over or click it and you’ll soon know what it does. If you are in a conversation with a bot, you have no idea what it can do and what it can’t.

It’s the robot’s job to seize every opportunity to suggest the next step and highlight less-familiar features.

My assistant did the same: it introduced next steps depending on the user’s skills and tried to bring up new and less-familiar features.

This article again proves to me that my topic could become – or already is – very important when thinking about future interactions with machines, in whatever form that may be: via voice control or a text interface.

Find the article here.

Further links to the same topic:

  • Twine, a tool for telling interactive, non-linear stories (very similar to my «flows»).
  • Wit, building bots made easy.
  • Beep Boop, hosting platform for bots.
  • BotKit, Building Blocks for Building Bots.

Yellow Lamp Waits to be Painted White

yellow raw lamp soon painted white

Instead of buying another lamp, I decided to stick with the yellow one. It’s true that its look may not be as reduced as the radio’s, but after looking for an affordable alternative, I must say that finding a very good fit is probably impossible. Designing my own lamp won’t be possible due to time constraints, so I hope that painting the lamp in the exact same white as the other objects and removing some smaller parts will make it clear that these three objects belong to the same object family.

One Leap Motion Setup

Technological setup with one leap Motion

Sketch of how the exhibition setup could look (only schematic) if I use only one Leap Motion. The iMac would process the incoming data from the Leap device and display visual feedback on one projector.

It would also play the interface sounds directly (a USB audio interface would be needed for that, because headphones will also be attached to the iMac, playing the sound of the concept video).

For the text assistant and the music, it would send shiftr commands to the MacBook, which plays back the music and at the same time displays the assistants of the three objects on the second projector.
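To make that division of labour concrete, here is a minimal sketch of how the MacBook side could subscribe to those shiftr messages and route them, using the mqtt.js client. The topic names, the broker credentials and the handler functions are assumptions for illustration, not the actual project code.

// Sketch of the MacBook-side subscriber (assumed topic names), using mqtt.js.
// Broker URL and credentials are placeholders.
const mqtt = require('mqtt');
const client = mqtt.connect('mqtt://key:secret@broker.shiftr.io');

client.on('connect', () => {
  // one topic per concern: assistant text and music commands
  client.subscribe(['assistant', 'music']);
});

client.on('message', (topic, payload) => {
  const msg = payload.toString();
  if (topic === 'assistant') {
    showAssistantMessage(msg);   // render the chat line on the second projector
  } else if (topic === 'music') {
    handleMusicCommand(msg);     // e.g. 'play', 'pause', 'volume:0.6'
  }
});

function showAssistantMessage(text) { console.log('[assistant]', text); }
function handleMusicCommand(cmd)   { console.log('[music]', cmd); }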

Multi-Object Assistant Layout

Multi object assistant chat layout

Transparency in Multi object chat layout

In a setting with three objects and three assistants (text aids for understanding the gestures), but only one space to display the text, I need a layout that makes clear which object is «currently» speaking.

The first image above depicts such a layout. The question is whether the subtle color differences and the labels («Radio», «Lamp», «Ventilator») are enough to distinguish the different assistants. And what if one would like to read a comment from the lamp that has already left the screen because of two quick messages from the radio?

Maybe the second picture makes it clearer which object is currently active.

In a setup where object selection has to take place (with only one Leap Motion, see Prototyping Object Selection), it would also be imaginable that only the chat of the currently active object is displayed.

An animation between the different object chats would be necessary and could maybe look like this:

multi object chat switch animation

Prototyping Object Selection

Video Documentation: Prototyping Object Selection from Jones Merc on Vimeo.

Yesterday the idea of using only one Leap Motion came up in a discussion. There are some advantages but also some downsides to such a setup.
One plus is that the technological mess behind three Leaps (three computers, sending sound and text commands back and forth between them via shiftr) would definitely decrease. Another big advantage is that I would be able to display simple visual feedback about the state and tracking of the user’s hand. This could help communicate that even slight finger movements are tracked, not only big arm movements.
(With three Leaps and three computers – of which only one is attached to a beamer – it would be practically impossible to display the finger movements in real time, because all the tracking information would have to be sent to the «beamer computer» and interpreted there. With only one Leap, I could display the visual feedback all the time.)

One big disadvantage would be that only one object can be controlled at a time. Before manipulating the light of the lamp, the lamp has to be «selected» somehow. While discussing this matter, the best solution seemed to be pointing at the object. This pointing/selecting would only be possible at a certain height; the hand has to have enough distance from the Leap device. Lowering the hand then «dives into» that object and allows controlling only that one.
Unfortunately, some gestures sit right at the border of the selection area: when changing the volume of a song, the height of the hand represents the volume. But if one turns the volume up very far, the hand suddenly enters the «object selection height» and automatically switches the object.
This behaviour can be seen very well in the second part of the video above.
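A minimal sketch of how this height-based selection could be expressed with the Leap JavaScript client (Leap.loop, hand.palmPosition); the height threshold, the x-ranges for the three objects and the control handler are assumptions to be tuned.

// Height-based object selection (thresholds are guesses, not tested values).
const SELECTION_HEIGHT = 300;              // mm above the Leap: selection zone
const OBJECTS = ['lamp', 'radio', 'ventilator'];
let selected = null;

Leap.loop(function (frame) {
  const hand = frame.hands[0];
  if (!hand) return;
  const [x, y] = hand.palmPosition;        // mm; x: left/right, y: height

  if (y > SELECTION_HEIGHT) {
    // high hand: point left / centre / right to pick an object
    const index = x < -80 ? 0 : x > 80 ? 2 : 1;
    selected = OBJECTS[index];
  } else if (selected) {
    controlObject(selected, hand);         // lowering the hand «dives into» the object
  }
});

function controlObject(name, hand) {
  console.log('controlling', name, 'grab:', hand.grabStrength.toFixed(2));
}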

Otherwise, the video proves that object selection could be doable: by moving the hand in the object’s direction, the object is selected.
In a further elaboration of this idea, one could imagine that selecting an object projects a mapped border around it (see image below).

berg london light mapping
(Berg London, 2012, https://vimeo.com/23983874)

Video Documentation: Radio User Tests

Video Documentation: Radio User Tests from Jones Merc on Vimeo.

And finally some video material from the last user tests. The main focus lies on showing the problem with the volume adjustment (imaginary rotary knob). The video shows part of a discussion in which another proposal for the volume adjustment was brought up. It also shows a first implementation in use.

Another Set of User Testing

user_test_radio

Another user test with the music player was conducted, this time with well-known songs, mostly based on guitar sounds, to avoid interference with the user interface sounds. The sounds weren’t confusing anymore, and all the functions were eventually found by the user (with the help of the assistant). Several new findings were noted; the most important one is probably about the volume-changing gesture. A discussion after the testing led to new approaches.

volume change via rotary knob

Until now I had a gesture implemented where the user had to imitate grabbing a rotary knob and, by rotating the hand either to the right or to the left, could adjust the volume.
Problem: Tests showed that this gesture is really difficult to perform (on the one hand because it is not very ergonomic, and on the other hand because it is a very distinct hand position, which is difficult to track with the Leap Motion).

Together with users, new gesture propositions were considered. Here are some of the best approaches.

volume adjust - open hand z-axis

As with dimming the lamp, one could simply use the distance from the Leap to adjust the volume: the higher the hand, the louder the song. According to user feedback, this is also well understood, because a conductor in an orchestra indicates an increase or decrease of the volume by raising or lowering the hand. I already added this control to the prototype.
Problem: The volume is adjusted all the time. If you are about to change tracks (swiping) or are performing gestures to communicate with the assistant, you are still adjusting the volume. So if you swipe right and raise your hand a bit during that gesture, you increase the volume a bit. This unwanted behaviour could be ruled out with a pre-gesture: if one has to hold the open hand in the same position for 2 s and only then is able to adjust the volume, a lot of unwanted behaviour could already be excluded. Tests will tell whether this is true.


Edited:
After programming half of the gesture described above, I realized that one also has to «escape» this gesture again. Waiting 2 s and then adjusting the volume works fine, but what if I’m content with the volume and want to leave the volume-changing mode? First solutions that came to mind:

  • Performing any other gesture will escape
  • Bending any finger will stop it (to make sure the Leap Motion is not tracking one finger falsely, I may check whether two fingers are bent)
  • Taking the hand out of the interaction box (the field where the hand is tracked) will escape
  • Moving the hand sideways by more than a certain amount will escape as well

Together, these ideas may solve the problem of escaping the volume mode for a large fraction of users; a rough sketch of how the dwell and the escape conditions could fit together follows below. But again, testing will show whether that’s the case.
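A sketch of the dwell-to-enter / multi-way-escape logic, using the Leap JavaScript client; the thresholds, the 2 s dwell and the volume mapping are assumptions, not tested values.

// Dwell-to-enter volume mode, with several escape conditions (all values are guesses).
const DWELL_MS = 2000;
let dwellStart = null;
let volumeMode = false;
let entryX = null;

Leap.loop(function (frame) {
  const hand = frame.hands[0];
  if (!hand) { volumeMode = false; dwellStart = null; return; }  // escape: hand left the box

  const extended = hand.fingers.filter(f => f.extended).length;
  const [x, y] = hand.palmPosition;

  if (!volumeMode) {
    if (extended === 5) {                                        // open hand: start/continue dwell
      dwellStart = dwellStart || Date.now();
      if (Date.now() - dwellStart >= DWELL_MS) { volumeMode = true; entryX = x; }
    } else {
      dwellStart = null;
    }
    return;
  }

  // escape: two or more fingers bent, or a large sideways movement
  if (extended <= 3 || Math.abs(x - entryX) > 120) {
    volumeMode = false;
    dwellStart = null;
    return;
  }

  setVolume(Math.min(1, Math.max(0, (y - 100) / 300)));          // higher hand = louder
});

function setVolume(v) { console.log('volume', v.toFixed(2)); }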


volume adjust hand tilting

Another approach, discussed with test participants, is a gesture where one tilts the open hand either to the right or to the left. As long as the hand is tilted to the right, the volume is incrementally turned up. Going back to a horizontal (neutral) hand position does not influence the volume, and tilting to the left turns the music down. But some weaknesses were detected as well.
Problem: Again, when performing other gestures, it is very likely that a tilted hand is detected; when swiping left or right, the hand is often held in a vertical position. Requiring a certain time to enter the volume-change mode could again make the gesture less interfering, but if one has to do that anyway, I assume the high/low gesture is more intuitive.

volume_fist_z_axis

Furthermore, I thought about a specific hand position that has to be performed to adjust the volume. For example, making a fist would trigger the volume adjustment mode: the higher the fist, the louder the music, and vice versa.
Problem: Unfortunately, making a fist is also responsible for pausing a track. So whenever you would like to adjust the volume, you pause the current track. This idea is therefore not practicable.

Next hurdle overcome

leap_test_gallery

leap_test_successful

Testing the light situation of the exhibition space itself showed good tracking results for the Leap Motion. No infrared light interference. Happy day!

Disassembling the ventilator

disassembling ventilator object hacking

On the one hand I need to change the look of the ventilator a bit, and on the other hand I need to hide the button, as the ventilator will be switched on and off via gestures and not via buttons. So I had to disassemble it.

Documenting the Radio Object

documenting the music player object

Documenting a project is important. Doing just that.

Sound Design: New sounds

I redesigned the sounds for the interface. So far, the sounds for swiping, for entering and leaving the interaction box and for the volume click have changed. Two new sounds joined the family: docking on and off from the volume adjustment gesture.
All sounds now share common sound fragments to make it clear that they belong to the same sound interface.

Arduino Shield for Dimming Light

arduino_led_dim_shield_self_made_overview

arduino_led_dim_shield_self_made_detail

After assembling the electronics on a breadboard, I decided to solder a shield for my Arduino Yun. The power supply delivers 5 V, which is exactly what the Arduino needs to run; that’s why I could feed the power into the Vin pin. In the end I only need to plug it in, and it connects to the internet and receives incoming messages from shiftr.io.

New Radio

radio model wood new shape

radio model wood new shape

After revising the shape of the radio, I came up with a more reduced and simpler appearance. The current progress is visible in the pictures above: a wooden block as a core with only a few controls attached to it. To make the speaker covering, I had to come up with a clever way to generate a dot pattern in Illustrator.

circle_circle_dot_dot_illustrator_script_result

This script emerged from this. Check it out.
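For context, such a dot grid can be generated with a few lines of Illustrator ExtendScript. The following is only an illustrative sketch along the lines of the linked script, with arbitrary sizes, not the script itself.

// Illustrative ExtendScript sketch: draw a staggered dot grid for a speaker grille.
// Run inside Illustrator with a document open; all dimensions are arbitrary.
var doc = app.activeDocument;
var dotDiameter = 4;    // pt
var spacing = 10;       // pt between dot centres
var rows = 12, cols = 20;

for (var r = 0; r < rows; r++) {
  for (var c = 0; c < cols; c++) {
    // offset every second row for a honeycomb-like pattern
    var offset = (r % 2) * spacing / 2;
    var left = c * spacing + offset;
    var top = -r * spacing;
    doc.pathItems.ellipse(top, left, dotDiameter, dotDiameter);
  }
}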

Video Documentation: User Tests with Music Player

user testing music player teaser image
Another set of user tests was conducted. The music player and the helping assistant were tested this time. Again, a lot of valuable feedback came together. Here are a few points:

  • The sound design still needs some tweaking, because sometimes a sound was not interpreted as an interface sound but rather as part of the music. A clearer distinction would be desirable.
  • The music choice still needs to be refined. Electronic music is a bad choice for demonstration because of its similarity to the interface sounds.
  • For some people the assistant should maybe give more feedback, perhaps even visual feedback.
  • The gesture recognition/tracking could be improved for a better experience. Sometimes – especially under bad light conditions – the missing reliability can be very annoying.

Leap and Infrared Light

leap motion troubleshooting infrared light

I realised that my Leap Motion sensor does not track with the same accuracy every time. After some research I found out that infrared light sources strongly influence the Leap’s tracking results, because the device itself emits infrared light for good tracking (also in dark conditions). The built-in troubleshooting assistant shows whether everything is as it should be. At my current workstation I have good conditions, but will I find the same in the exhibition setting? That is what I have to find out.

One good thing is that the LEDs of my lamp object don’t seem to have any influence on the tracking result.

Video Documentation: Dimmable Lamp

Video Documentation: High Power LED dimming via Web from Jones Merc on Vimeo.

Using shiftr.io, an Arduino Yun and any device that publishes to shiftr (in this case a webpage publishing commands to increase or decrease the LED brightness) allows me to control a lamp. This means I can also use gestures to trigger the publishing of those commands.
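The webpage side is essentially just an MQTT client publishing to a topic. A hedged sketch using mqtt.js over WebSockets; the broker URL, credentials, element IDs, topic and payload strings are placeholders, not the actual project code.

// Web-page side: publish dim commands to shiftr (assumes the mqtt.js bundle is loaded).
const client = mqtt.connect('wss://key:secret@broker.shiftr.io');   // placeholder URL/credentials

// two assumed buttons in the page: #brighter and #darker
document.getElementById('brighter').addEventListener('click', () => {
  client.publish('lamp', 'increase');
});
document.getElementById('darker').addEventListener('click', () => {
  client.publish('lamp', 'decrease');
});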

Soldering High Power LEDs

rebel_high_power_led

To get a dimmable lamp I will switch to LEDs instead of common light bulbs. Three high-power LEDs should emit enough light to imitate a normal light bulb. I will place the LEDs in the same lamp shade as the one I used in the first prototype with the relay. Now that I don’t need the relay for the lamp anymore, I will reuse it for the ventilator.

User Tests with Radio Prototype

user test radio prototype teaser image

And at some point he (the programmed assistant) asks questions like «Did you know that other gestures exist?» That’s where I would like to answer, but no answer is expected by the machine. … That’s also confusing.

During the first complete user test with the music player, a lot of interesting feedback came together: beginning with things that seem quite easy to resolve, like the point above (the solution would be not to ask any questions the user can’t answer via gesture), going over to other simple statements («The music was too loud»), and ending with more complex issues like the question of whether and how much visual feedback is required for a pleasant user experience.

At the moment, visual feedback is non-existent and is substituted by acoustic feedback. Sounds for swiping, changing the volume and switching on and off are provided. Still, they are much more abstract, because the user first has to link a sound to a gesture or an action. Paired with faulty behaviour of the Leap Motion tracking device, this leads to a lot of frustration. Some of it can perhaps be reduced by redesigning the assistant and its hints (maybe even warnings that the tracking is not 100% accurate).

Further user testing will give more insight into whether and how much the assistant should intervene.
Also, a deeper analysis of the video recordings taken during the test will help improve the user experience.

Further notes:

  • Text display sometimes too fast
  • Sounds not distinguishable from music
  • Swipe is the clearest sound
  • Not clear why something was triggered
  • Inaccuracy (maybe the light situation was not perfect for the Leap’s tracking)
  • Assistant mostly taught the gestures correctly; sometimes they would not trigger due to technical constraints
  • On/Off gesture was not found by chance (in comparison with the lamp where almost all users found the exact same gesture to switch it on or off)

Radio Lasercut Frame

radio_lasercut_outside

radio_lasercut_inside_detail

To get a roundish – almost cute – shape, I lasercut slices to achieve a very organic form. I intend to glue them together and sand the edges down until I obtain a smooth surface. The shape will then stand on four small posts. The look will be similar to that of the small foam model.

Performance Problems

chrome timeline for performance profiling

After realising that I was facing performance problems in the browser, I checked Chrome’s timeline to see where the bottlenecks occurred. The biggest factor was «painting» the new characters onto the browser’s canvas, just like you can see here in this early prototype.
I am not sure yet how to solve this problem, but for now I disabled the typewriter function.
Another bottleneck could be the scripting, including all the calculations made in the gesture checker JavaScript file. Maybe I will have to throttle some tasks to improve performance again.
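One common way to throttle such tasks is to run the expensive check at most every N milliseconds instead of on every Leap frame. A small sketch; the 50 ms interval and the checkGestures name are placeholders.

// Generic throttle helper: wrap an expensive function so it runs at most every `interval` ms.
function throttle(fn, interval) {
  let last = 0;
  return function (...args) {
    const now = Date.now();
    if (now - last >= interval) {
      last = now;
      fn.apply(this, args);
    }
  };
}

// e.g. run the heavy gesture check at most every 50 ms
const checkGesturesThrottled = throttle(checkGestures, 50);
Leap.loop(frame => checkGesturesThrottled(frame));

function checkGestures(frame) { /* expensive per-frame analysis would go here */ }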

Controlling an LED via Arduino’s PWM

schematic drawing sketch

This schematic shows a setup to control (dim) an LED via Arduino. An external power supply is switched by a transistor, which is driven by an Arduino output pin. Pulse width modulation (PWM) regulates the effective power delivered to the LED, and thus its brightness.

I need this to dim the lamp via gestures. I will therefore map a certain value (e.g. the Y-axis value of the hand) to the brightness of the LED.
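The mapping itself could be as simple as normalizing the hand height inside the Leap’s interaction box and scaling it to the 8-bit PWM range. A sketch; the publishing function is a placeholder for however the value reaches the Arduino.

// Map normalized hand height (0..1) to an 8-bit PWM value (0..255).
Leap.loop(function (frame) {
  const hand = frame.hands[0];
  if (!hand) return;
  // normalizePoint returns [0..1, 0..1, 0..1]; index 1 is the height
  const ny = frame.interactionBox.normalizePoint(hand.palmPosition, true)[1];
  const brightness = Math.round(ny * 255);
  publishBrightness(brightness);            // e.g. send via shiftr to the Arduino
});

function publishBrightness(value) { console.log('PWM value', value); }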

Radio Miniature Model

radio Model prototype Miniature form finding industrial design

The shape of the radio object should be like the one in the model, or at least similar: very few controls, so as not to confuse the user about functionality (few controls = few gestures). I’d like the shape to be roundish, a little like the colored iMacs. The object should not feel too technical, because there is already a big step between current radios and gesture-controlled radios – a step based on technical changes. By making the object rounder, I intend to increase the feeling of talking to something «smart», something alive.

Sound Design: First Sounds

After playing around with the recordings, a lot of trying out in GarageBand (software instruments) and searching a sound database, I came up with the sounds above.

  • Cancel: For the cancelling gesture (waving your hand saying no/stop)
  • Hover In: When entering the interaction field of the Leap
  • Hover Out: When leaving the interaction field of the Leap
  • Swipe Left: Selecting the next track
  • Swipe Right: Selecting the previous track
  • Thumb Up/Ok: Saying «Yes»/«OK» to the smart device during the dialogue
  • Volume Adjust: Plays as long as the volume is being adjusted with the gesture of grasping an invisible rotary knob

Sound Design First Recordings

First raw recordings (apart from normalizing and very basic noise reduction). Sounds mainly for the «next track» and «previous track» gestures, or for when a hand enters the Leap Motion’s interaction field.
Combining them with instruments and manipulating the sounds will be necessary.

Increasing Complexity of Interaction Flow

interaction flow with increased complexity

Writing the interaction flow for the music player, with gestures for play, pause, track change and volume adjustment, is already a lot more complex than the one I wrote for a lamp with a simple on and off switch.
The «smarter» the device, the more complex the multilinear story flow. Imagining this with a much more elaborate product seems almost crazy – or makes one think that artificial intelligence will definitely be a must if products ever want to appear really «smart».

Leap Motion Review

Leap Motion Quality Product Shot High Resolution Gesture Tracking Device

After having worked with the Leap Motion for almost three months, I can say that it does indeed do some awesome work tracking hands and fingers. The technology seems to have advanced far enough that such products may be used in commercial applications very soon. However, I also have to state that when programming more complex gestures than the ones that come ready-made with the Leap SDK (swipes, air-taps and circles), it gets very difficult to track them accurately – partly because gestures naturally interfere (forming the hand into a «thumbs-up», for example, will always resemble clenching the hand into a fist).

This, combined with the fact that the Leap sensor sometimes misinterprets the position of fingers (e.g. the index finger is extended, but the Leap Motion says otherwise), makes it even more difficult to get reliable tracking.
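As an illustration of the interference problem, here is a hedged sketch of how a thumbs-up might be distinguished from a fist with the Leap JavaScript client; the thresholds are guesses and this is not the project’s actual gesture checker.

// Both poses have a high grabStrength; the extended thumb is the distinguishing feature.
Leap.loop(function (frame) {
  const hand = frame.hands[0];
  if (!hand) return;

  const thumbExtended = hand.fingers.some(f => f.type === 0 && f.extended);   // type 0 = thumb
  const othersFolded  = hand.fingers.filter(f => f.type !== 0 && !f.extended).length === 4;

  if (othersFolded && thumbExtended && hand.grabStrength < 0.9) {
    console.log('thumbs-up');
  } else if (hand.grabStrength > 0.95) {
    console.log('fist');
  }
});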

But wouldn’t it be boring if everything ran perfectly smoothly?

Black Ventilator

black ventilator

This black ventilator is waiting to be painted white to serve as an artefact in the exhibition at the ZHdK. In a few weeks, one will be able to control this and other things via gestures.

3 Leap/3 Computer Setup

3-Leap-Setup

Because I can only use one Leap per computer, I need to set up a rather complex linkage. I will use an iMac and two MacBooks and attach one Leap Motion to each of them. It gets even more complex because different sounds need to be played. The iMac will also show the concept video, so the video’s sound will be played via headphones. One MacBook will process the Leap input associated with the music player (radio), so the headphones attached to that MacBook will play the music itself.
This leaves the remaining MacBook to play the interface sounds via a speaker.
To play the interface sounds triggered by the Leaps connected to the other two computers, I will probably use shiftr to send play commands to the one computer that plays back the sounds.

Sound Design for Interface

Acoustic feedback could enhance the gesture-based interface experience. Without haptic feedback and with only little visual feedback, an audible hint could help users understand what effect their actions will produce. Acoustic feedback could indicate whether the user is holding their hand in the right position and could also help to find the «correct» gesture.

A few attributes I am looking for in my interface sounds:
– pleasant
– a bit mechanical
– unobtrusive
– discreet
– confirmative
– not squeaky
– not too playful

An imaginable sound for a lamp to switch on could maybe sound like this:

Problem with palmVelocity

log of direction change [x, y, z]

The Leap Motion SDK offers hand.palmVelocity. Unfortunately, it does not seem to behave as I would expect when measured during fast movements. For a quick hand shake (indicating a cancel gesture), I may use the hand’s palm position instead to get more reliable tracking. The picture above shows a log of direction changes based on the x-coordinate ([x, y, z]). Again, a good understanding of how the Leap registers hand movements is crucial for a successful gesture implementation.
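A sketch of that palm-position approach: count rapid direction changes of the x-coordinate within a short time window and treat three of them as a cancel wave. The window and the count are assumptions.

// Detect a quick "no/stop" hand shake from palm x-positions instead of palmVelocity.
let lastX = null, lastDir = 0, changes = [];

Leap.loop(function (frame) {
  const hand = frame.hands[0];
  if (!hand) { lastX = null; lastDir = 0; changes = []; return; }

  const x = hand.palmPosition[0];
  if (lastX !== null) {
    const dir = Math.sign(x - lastX);
    if (dir !== 0 && lastDir !== 0 && dir !== lastDir) changes.push(Date.now());
    if (dir !== 0) lastDir = dir;
  }
  lastX = x;

  // three direction changes within 600 ms count as a cancel wave (assumed values)
  changes = changes.filter(t => Date.now() - t < 600);
  if (changes.length >= 3) { console.log('cancel gesture'); changes = []; }
});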

Finger Tip Recording

logging_finger_position

Before defining rules for a new gesture detection, I often need to carefully observe the data recorded by the Leap Motion sensor. Here I’m trying to find out what it takes to define the well-known «OK gesture». I especially need to screen the fingertip positions of the thumb and the index finger.
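Based on such logs, an «OK» check could boil down to the thumb and index tips being close together while the remaining fingers stay extended. A sketch with an assumed 30 mm threshold, not the project’s final rule set.

// Check whether the thumb tip and index tip are close together (OK gesture).
function distance(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

Leap.loop(function (frame) {
  const hand = frame.hands[0];
  if (!hand) return;

  const thumb = hand.fingers.find(f => f.type === 0);   // type 0 = thumb
  const index = hand.fingers.find(f => f.type === 1);   // type 1 = index
  const othersExtended = hand.fingers.filter(f => f.type > 1).every(f => f.extended);

  if (thumb && index && othersExtended &&
      distance(thumb.tipPosition, index.tipPosition) < 30) {   // mm, assumed threshold
    console.log('OK gesture detected');
  }
});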

Multiple Leap Motions not Supported

multiple leap motion request

Unfortunately, one cannot attach more than one Leap Motion to one computer, although the feature is highly requested by a lot of developers. Attaching them to Raspberry Pis or tablets will not work either. I will have to try to get hold of additional computers to realise the desired setup with three Leaps.

Video Documentation: User Tests

Video Documentation: User Tests from Jones Merc on Vimeo.

User tests showed that users direct their gestures mostly towards the object they want to control. A direct dialogue between the user and the object is obvious.
The tests also gave some interesting insights into how users interact with the gesture lamp. All of them found one of the on/off gestures pretty quickly, but were puzzled if they came across the other one, which confused rather than helped them.
Another interesting remark was that when controlling sound, the source is less evident than with a light (lamp) or wind (fan) source. Sound/music rather surrounds us, and therefore a gesture may not be directed as clearly at the object (music player) itself.
The tests certainly make me rethink some interaction steps and help to develop a smoother interaction flow.

Setback: Leap upside down without good results

Setback

Unfortunately, testing the Leap Motion upside down, hanging from a lamp and pointing downwards, does not show the same accuracy as when it is placed in the upright position. I therefore need to rethink the exhibition layout.

User Testing

User-Testing-Prototype-1

How would you interact with three objects if you had to control them via gestures?

This and other questions are asked in the current user testing phase, where I want to see how people interact with multiple objects (so far, they mostly direct their gestures towards the object).

First gesture lamp prototype

Video Documentation: Gesture Lamp Prototype 1 from Jones Merc on Vimeo.

By connecting a relay to the internet (shiftr.io, Arduino Yun), I am able to control a lamp via gestures, using a Leap Motion to detect movements. This prototype can now serve for further user testing to see how people interact with it and whether they discover the control gestures by themselves.

Relay connected to the web

Relais_Arduino_Internet

To build a lamp prototype that is controllable via gestures, I attached a relay to a normal extension cable. It can connect to the internet, and by using Shiftr.io I can send commands from different sources.
The next step will be to use my Leap Motion to send those commands.

Vector geometry matters

Dot_product

Always nice to see that those mathematics lessons were not in vain. I just refreshed my knowledge of the dot product to calculate angles between two vectors.
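For reference, the relation used is cos(θ) = (a·b)/(|a||b|), which translates into a few lines of JavaScript; the palm-normal usage example is only an assumption of how it might be applied here.

// Angle between two 3D vectors via the dot product (returns radians).
function angleBetween(a, b) {
  const dot = a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  const len = v => Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
  return Math.acos(dot / (len(a) * len(b)));
}

// e.g. angle between the palm normal and the downward vertical axis:
// angleBetween(hand.palmNormal, [0, -1, 0])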

First electronics

Relay-Board

Relay board schematic to control a common light bulb (or anything attached to a 230 V socket) with an Arduino.

Javascript Patterns

javascript_patterns

javascript_patterns_closeup

For coding my gesture recognition and setting up a user flow, I decided to dive deeper into the universe of JavaScript to improve my general understanding of the language and to use proven programming patterns in my code to keep it clean, slick and maintainable.
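As a small example of the kind of pattern meant here, a revealing-module wrapper keeps gesture state private and exposes only a minimal API; the names are illustrative, not the actual project code.

// Revealing module pattern: private state, small public interface.
var GestureChecker = (function () {
  var lastGesture = null;                 // private state

  function check(frame) {
    // ... per-frame gesture analysis would go here ...
    return lastGesture;
  }

  function reset() { lastGesture = null; }

  return { check: check, reset: reset };  // public API
})();

GestureChecker.check(/* frame */);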

Mini Interaction Flow

mini_flow

A first quick interaction flow in which a user is taught how to say OK and how to cancel a dialogue with an object. Find the whole user flow here.

Interaction System: 1-2

1-2-Managing-and-Entertaining

1-2 Managing and Entertaining
The output of a self-regulating system becomes input for a learning system. If the output of the learning system also becomes input for the selfregulating system, two cases arise. The first case is managing automatic systems, for example, a person setting the heading of an autopilot—or the speed of a steam engine. The second variation is a computer running an application, which seeks to maintain a relationship with its user. Often the application’s goal is to keep users engaged, for example, increasing difficulty as player skill increases or introducing surprises as activity falls, provoking renewed activity. This type of interaction is entertaining—maintaining the engagement of a learning system. If 1-2 or 2-1 is open loop, the interaction may be seen as essentially the same as the open-loop case of 0-2, which may be reduced to 0-0.

Source: Dubberly, Pangaro, Haque. «What is Interaction? Are There Different Types?». 2009. ACM 1072-5220/09/0100

Prototype Sketch: Radio Object

Object-Prototype-Sketch-v1.0

First sketch of the smart radio as an object controllable via gestures. The object should be reminiscent of a radio to indicate its functions, but the controls happen via gestures. The dialogue through which a user gets to know the different interaction inputs is a key feature in the development of this object prototype.

Testing a dialogue between object and user

Video Documentation: Object – Wizard of Oz from Jones Merc on Vimeo.

A non-functional prototype (object) in a «Wizard of Oz» test setup. A nearby computer allows writing the text that is subsequently displayed on the object’s screen. Without having to program a smart object with fully functioning gesture recognition, one can test different scenarios, like this dialogue between user and object. The focus of the dialogue is how to slowly establish a gesture language without presenting it to the user up front, but rather developing it in a dialogue between the two.

Object Dialogue

Object-Wizard

The object above is a prototype for a smart computer that is able to regulate things for the user. The user can interact with it via gestures, and for simplicity the object has a display to «speak».
This object allows testing different scenarios with the «Wizard of Oz» method. The object just displays text that I am writing at a nearby computer. I can thereby present the object to a possible user and let them interact with it, while controlling the object’s feedback myself.

The Object

The-Object

Simulating a smart (and learning) object with a smartphone and a live code editor. The next step would be to build a 3D object around this smart-device simulation.

Teach Leap New Gestures

The framework LeapTrainer.js allows teaching a program new gestures. In the video it seems to work nicely; in reality it does not work that well. Improving the learning mechanism would be crucial for using it in my project.

The corresponding code can be found on github.

What’s your Gesture?

Video Research: What’s your Gesture from Jones Merc on Vimeo.

To see whether gesture trends can be found, I asked multiple people to perform a gesture for each of eleven terms. They were asked to use only one hand and did not have time to think about the gesture beforehand. I found that in some cases the results were pretty predictable and most of the gestures were alike. Other terms provoked a high diversity of gestures, and sometimes very creative gestures were used.

Another finding was that the interface controlled with such gestures would directly influence the gesture itself. A lot of people asked what the object to be controlled would look like and said they might have come up with different gestures if they’d seen the object or the interface.

To see the full-length gesture video, go here.

Classification of Gestures

classification of gestures

Interesting classification of gestures into subcategories:

  • pointing – used to point at objects or indicate a direction.
  • semaphoric – a group consisting of gesture postures and gesture dynamics that are used to convey specific meanings. Example: swipe gesture.
  • iconic – used to demonstrate the shape, size or curvature of objects or entities.
  • pantomimic – used to imitate the performance of a specific task or activity without any tools or objects.
  • manipulation – used to control the position, rotation and scale of an object or entity in space.

Source: Roland Aigner, Daniel Wigdor, Hrvoje Benko, Michael Haller, David Lindbauer, Alexandra Ion, Shengdong Zhao, and Jeffrey Tzu Kwan Valino Koh. Understanding Mid-Air Hand Gestures: A Study of Human Preferences in Usage of Gesture Types for HCI. Technical report, November 2012.

Merge Gesture Prototype

Video Documentation: Merge Gesture Prototype with Leap Motion from Jones Merc on Vimeo.

A small programmed prototype showing how a «new» gesture can be used to merge something. Application fields are still vague, but this could be a way to diversify the interaction design of multi-use devices like smartphones or computers. Using specific gestures to initiate a certain feature of an app could also strongly enhance productivity.

Affording Gestures

Can we design affordances for gestures without a tangible object to interact with? Can interfaces be built with more or less clear indications of how a gesture should be performed? This question is strongly linked to the question of how one can teach a user a (new) gesture language.
An article about affording horizontal swipe gestures on touch screens can be found here. From personal experience I can say that (either because of true affordance or because users have learned how to interact with such interfaces) the depicted interfaces indeed show some swiping affordances. Can this also be achieved for mid-air micro-gestures?

Leap Motion First Test

Leap-Test

Use a Micro-Gesture to start an application’s feature

Video Sketch: Start Shazam’s Listening Feature via Gesture from Jones Merc on Vimeo.

What if one could define specific gestures to start specific features of an application?
This would be a diversification of the smartphone’s interface (see also: Standardised vs. Diversified Interaction), because one could skip all the buttons one would normally need to navigate into the app and to a particular action or feature.
In the video sketch I show how it could feel if the cupping-your-hand gesture initiated Shazam’s song-listening feature. Especially in that use case, one is glad if as little time as possible is needed to start the listening function (otherwise the song may already be over).

Standardised vs. Diversified Interaction

Diversificated-Interaction-Process_normal

Diversificated-Interaction-Process_diversified

Highly standardised multifunctional devices like smartphones or computers sometimes need quite tedious actions before you finally reach the action/control you want to execute. Because the device is multifunctional, touch buttons (in the case of a smartphone) need to tell it which application and which action should be executed, narrowing down the options step by step.

If such devices could also be controlled in a diversified way (concerning interactions), for example via micro-gestures, which offer far more possibilities, one could skip lots of steps. One specific gesture could mean: go to app A, start feature B and choose option C.

Of course, further questions arise within that use case, for example what happens if a gesture is too close to an everyday-life gesture and might start a process unintentionally.
For that case, a preceding gesture could solve the problem. Just as saying «OK Google» initiates voice control in Google services, a very specific and unique gesture could start the gesture recognition.

Interaction Diversification

Computers (including smartphones) are multifunctional devices serving a lot of different applications. Therefore they are highly standardised in terms of input and output media and interaction design (to guarantee high efficiency). There is little opportunity to design interaction (Joep Frens, 2006).

One of the questions addressed in this project asks whether it is possible to diversify the interactions with computers/smartphones by using gestures or micro-gestures.

A few advantages of that scenario:
– Operations can be completed much quicker (see also: Standardised vs. Diversified Interaction).
– Operations don’t require immediate spatial proximity (one does not have to hold the smartphone in one’s hands).
– The number of possibilities to design interactions is highly multiplied.
– The effort to operate can be drastically lowered.
– It has similar advantages to voice control, but can be much more convenient in numerous situations.

Gesture Camera

Video Sketch: Gesture Camera from Jones Merc on Vimeo.

An example of a micro-gesture application in the real world. Instead of using buttons, Joep Frens proposed rich interactions to operate a camera. I took the exact same context and applied possible gestures to this scenario.

Definition: Micro-Interactions

Microinteractions differ from features in both their size and scope. Features tend to be complex (multiuse case), time consuming, and cognitively engaging. Microinteractions on the other hand are simple, brief, and should be nearly effortless. A music player is a feature; adjusting the volume is a microinteraction inside that feature.

A definition of a micro-interaction in comparison with the bigger feature from:
Saffer, Dan.«Microinteractions». Beijing: O’Reilly, 2013.

The text further describes what microinteractions are good for:
• Accomplishing a single task
• Connecting devices together
• Interacting with a single piece of data, such as a stock price or the temperature
• Controlling an ongoing process, such as changing the TV channel
• Adjusting a setting
• Viewing or creating a small piece of content, like a status message
• Turning a feature or function on or off

Definition of Human-Product Interaction

Interaction: the relation, in use, between a product and its user mediated by an interface.

Joep Frens’ definition of human-product interaction.

Source: Joep Frens. «Designing for Rich Interaction: Integrating Form, Interaction, and Function». Eindhoven University of Technology, 2006.

Video Sketch: Movie Scrubbing

Video Sketch: Movie Scrubbing via Micro-Gesture from Jones Merc on Vimeo.

The video sketch attempts to convey the feeling of controlling a video via micro-gestures: grabbing the playhead and sliding it back and forth.
Such a scenario could be useful in presentations, for example, where it’s inappropriate to walk over to the attached computer and move the playhead with a mouse or trackpad. To further explore the possibilities brought by such an interaction, a prototype is indispensable.

Micro-gestures Library Video Sketch

Video Sketch: Micro gestures library from Jones Merc on Vimeo.

The video is an attempt to visualize the most important and easiest micro-gestures. During the making, a few things became clear.
– First, a gesture should preferably consist of a natural hand position. Otherwise, executing the gesture becomes strenuous over time.
– Second, some points did not become clear just by executing the gesture. For example, how useful and how precise those movements are for controlling machines or processes will only become clear once a working prototype is available. (Is the scrolling gesture precise enough while still offering the possibility to scroll through a whole document quickly?)

All in all, this video can still serve as a basic library and as a starting point for developing some gestures further in the prototyping step.

Micro-gesture Definition

A micro-gesture (microgesture) is a gesture that is created from a defined user action that uses small variations in configuration, or micro-motions. In the most general sense, micro-gestures can be described as micro-interactions. Being that micro-gestures use small variations in configuration and motion, gesture actions can be achieved with significantly less energy than typical touch, motion or sensor-enabled gestures. This has great benefits for users as it allows effortless interactions to occur in rapid succession which directly supports greater fidelity and productivity within natural user interfaces.

This is a definition of «micro-gesture» found on http://www.gestureml.org/doku.php/gestures/fusion/microgesture_index.

I want to highlight the point that micro-gestures are very easy to perform, and I want to add that they also have the advantage of being very unobtrusive.

Nod – Micro-gestures with a Ring

Do we need a ring for such interactions? Maybe for now, but what about the future?
And what possibilities for new interactions does such a device offer?

Exhibition Layout Proposal

Exhibition-Layout

A stand where micro-interactions/micro-gestures are recorded is centered in front of the installation itself. To its left, a table with the computer (doing the computing and showing the concept video) and the printed thesis is placed.

Google’s Gesture Book

Google-Touch-Gestures

Google describes touch gestures on this webpage. Touch gestures are divided into touch mechanics and touch activities. Is this list complete? …

A List of Micro-Interactions in Augmented Reality

Gestures

A list of commonly known finger interactions. Haptic feedback is possible even in virtual reality by manipulating a real-world object.

Source: Wolfgang Hürst & Casper van Wezel. «Gesture-based interaction via finger tracking for mobile augmented reality». 2012. PDF

Futile Interactions

Video Research: Unguided interactions from Jones Merc on Vimeo.

People trying to interact with the touch interface. Unfortunately, in almost all cases the interactions seem random. So the following question arises: how do we guide the user without a graphical user interface indicating which interactions are possible?

What if We Could Start our Apps

Pantomime Gesture Actions

What if we could start the apps on our computer that we use every day (email app, browser, editor… you name it) via micro-gestures, just as if you were depicting your actions in a pantomime?
Another approach could use dumb objects as interfaces. In the sketch above I imagined a miniature mailbox that can be opened a bit; that very action would then launch the mail app on a computer.

What if…

  • What if we tried to move away from button-focused interfaces?
  • What if technology could register the smallest micro-gestures of a user’s hand, wherever they are?
  • What if rich-interaction interfaces ruled the world?
  • What if dumb objects could be turned into interfaces on the fly and used to control machines, processes and systems?

Video Research: Current solution for axis control

Video Research: Axis control interaction from Jones Merc on Vimeo.

The device shown allows controlling the X, Y and Z axes individually. I’m asking whether it would be possible to use gestures, or a tablet-gesture combination, to control the three dimensions in a better way. If focused on gestures only, all three dimensions could be controlled at the same time by moving a hand in space and changing the values for the X, Y and Z coordinates.

The video can be accessed with the password “interaction”.

Links

This post contains a list of interesting and important links for this project.
I will keep this post at the end of the blog.