Since the iPhone introduced capacitive touch screens to the consumer market, touch gestures have been all around us. A gesture is a movement that is recognized by the system and triggers a function. webOS uses gestures heavily, e.g. for closing applications; Apple uses them e.g. to reveal the button for deleting mails.
In current gestural/touch interfaces it is invisible that gestures can be performed at all – and equally invisible what they will do. Users must know these gestures beforehand. For something that proclaims to be a “natural user interface”, this sucks. Don Norman wrote about this a while ago, so I'm not telling you anything new.
I know of no approaches that introduce visibility to touch-gesture interfaces. So I gave it some thought myself.
First, the user needs to know that a gesture can be performed, and in which way. To signify this, I used an element that is already a standard in mouse-based interfaces: on scrollbars and at the corners of windows, a “rough” ribbed structure shows that these elements can be moved. This convention has been around for some time. So in my designs I use this visual expression as a signifier for elements a gesture can be performed on. Note that it also indicates the direction of the gesture, as the riffle runs counter to the movement's direction.
Now the user still does not know what the gesture will cause – e.g. does it delete an item, or start reordering elements? – and when – will a short movement trigger the action, or is a longer one needed? I don't have a solution for showing this beforehand. But at least it can be shown directly after the gesture is initiated: close to the element, but not obscured by the finger, a button is shown – transparent at first, then with greater contrast as the gesture progresses. When the distance threshold is exceeded, the button appears pressed. This again draws on users' experience with common interfaces and standards. Try it yourself: demo.
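A minimal sketch of that feedback logic, separated from any particular framework. The names (`THRESHOLD`, `feedbackFor`) and the threshold value are my own assumptions for illustration, not taken from the demo:

```javascript
// Assumed threshold: how far (in px) the finger must travel before the
// action "arms". Purely illustrative – the demo may use a different value.
const THRESHOLD = 80;

// Map the distance dragged so far to the hint button's visual state:
// opacity ramps from 0 (transparent) to 1 as the gesture progresses,
// and once the threshold is crossed the button renders as "pressed",
// signaling that releasing now will trigger the action.
function feedbackFor(distance) {
  const progress = Math.min(Math.max(distance / THRESHOLD, 0), 1);
  return {
    opacity: progress,              // low contrast early, full contrast near the threshold
    pressed: distance >= THRESHOLD, // releasing now would perform the action
  };
}

// Wiring it up in a browser might look like this (hypothetical elements):
// element.addEventListener('pointermove', (e) => {
//   const d = Math.hypot(e.clientX - startX, e.clientY - startY);
//   const state = feedbackFor(d);
//   hintButton.style.opacity = state.opacity;
//   hintButton.classList.toggle('pressed', state.pressed);
// });
```

Keeping the distance-to-state mapping as a pure function makes the behavior easy to tune and test independently of the pointer-event plumbing.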
I am not totally satisfied: the gestures can't be much more advanced this way, and the result is still not clear before the gesture is started. Any ideas?