Sketchable Interaction

Status: ongoing

Runtime: 2018 -

Participants: Jürgen Hahn, Raphael Wimmer

Keywords: Interaction Techniques, Interaction Design, Computer Vision

Goal

Development of a framework that provides easy access to robust detection, tracking, and digitisation of physical documents or devices in combination with the affordances of virtual windows, files, etc., so that application developers can build and evaluate interaction techniques.

Figure 01: Digital twin of a physical document.


News / Blog

PDA Group at CHI 2018 (2018-04-21)

We will present a poster and a workshop paper at CHI 2018.


Status

Users can sketch interactive regions using their fingers as a brush. They assign a desired effect to the brush via a context menu that is triggered by the detection of their hand. The defined region then applies this effect to any colliding object. For example, to define a Send-via-Email region, users choose this effect for their brush via the hand context menu, sketch a region onto the surface, and drag a file's icon over the drawn region in order to send the file to the defined recipient.
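
A minimal sketch of this region/effect model may help clarify the concept. All names here (Region, Item, send_via_email) are hypothetical illustrations and do not reflect the prototype's actual API:

    from dataclasses import dataclass

    @dataclass
    class Item:
        name: str
        position: tuple  # (x, y) on the tabletop surface

    class Region:
        """A user-sketched region that applies one effect to colliding objects."""

        def __init__(self, contour, effect):
            self.contour = contour  # list of (x, y) points sketched by the user
            self.effect = effect    # callable applied to every colliding object

        def contains(self, x, y):
            # Point-in-polygon test (ray casting) against the sketched contour.
            inside = False
            pts = self.contour
            for i in range(len(pts)):
                x1, y1 = pts[i]
                x2, y2 = pts[(i + 1) % len(pts)]
                if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                    inside = not inside
            return inside

        def on_move(self, obj):
            if self.contains(*obj.position):
                self.effect(obj)

    def send_via_email(obj):
        # Placeholder effect: the real region would digitise physical documents
        # and hand digital files to a mail client.
        print(f"sending {obj.name} to the region's configured recipient")

    region = Region([(0, 0), (100, 0), (100, 100), (0, 100)], send_via_email)
    region.on_move(Item("report.pdf", (50, 50)))  # inside the region: effect fires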

The current explorative prototype supports five types of interactive regions:

  • Seamless Zoom
    • dragging a file icon onto such a region seamlessly renders its content at a readable size
  • Region Delete
    • delete an undesired region by selecting it with your hand
  • Send-via-Email
    • email a physical document by dragging it onto such a region
    • email a digital file by dragging its icon onto such a region
  • Storage (once an eligible object is dragged onto such a region)
    • digitise a physical document, creating a digital twin and visually emphasising their link
    • print a new physical document based on the file's contents
  • Conveyor Belt
    • allow users to automate simple tasks by defining such regions and connecting them with other regions (see the sketch after this list)
    • allow users to temporarily store objects on a looped conveyor belt
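
The linking behind the Conveyor Belt could be sketched as follows; LinkedRegion and its method names are assumptions, not the prototype's API:

    class LinkedRegion:
        """Hypothetical sketch of region linking: each region may forward
        received objects to a downstream region, so effects can be chained
        (e.g. digitise, then email) without manual dragging."""

        def __init__(self, name, effect=None):
            self.name = name
            self.effect = effect  # optional effect applied on arrival
            self.next = None      # downstream region, set via link()

        def link(self, other):
            self.next = other
            return other          # enables chaining: a.link(b).link(c)

        def receive(self, obj):
            if self.effect:
                self.effect(obj)
            if self.next:         # conveyor behaviour: pass the object on
                self.next.receive(obj)

    digitise = LinkedRegion("digitise", lambda o: print(f"digitising {o}"))
    email = LinkedRegion("email", lambda o: print(f"emailing {o}"))
    digitise.link(email)

    digitise.receive("paper invoice")  # both effects run, in order

A looped belt for temporary storage would additionally need a stop condition to avoid forwarding objects endlessly.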

This prototype is meant to enable deeper thinking about the Sketchable Interaction (SI) concept.

Prototype (July 2018)

Background

In order to implement and research possible interaction techniques for physical-digital workflows and workspaces, a developer-friendly framework is required that enables fast user, hardware, and software testing iterations. The framework's first iteration targets the Samsung SUR40 Multi-Touch Table (MTT), using its camera pixels to generate a 960×540 surface image of the otherwise 1080p display. This image is evaluated for markers, text, etc. in order to prove interaction technique concepts and potential new input modalities, such as digitally stamping / tagging physical documents with a tangible interaction device.
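
As a rough illustration of that evaluation step, the surface image could be scanned for ArUco markers along these lines (a sketch assuming the aruco module from opencv-contrib and the OpenCV 3.x API; the device index and dictionary choice are placeholders):

    import cv2

    # Grab one frame of the SUR40's surface image; device index 0 is a placeholder.
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()

    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Detect ArUco markers with the OpenCV 3.x aruco API.
        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        corners, ids, rejected = cv2.aruco.detectMarkers(gray, dictionary)

        if ids is not None:
            for marker_id, quad in zip(ids.flatten(), corners):
                centre = quad[0].mean(axis=0)  # mean of the four corner points
                print(f"marker {marker_id} at {centre}")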

Used Technologies:

  • Samsung SUR40
  • Custom Debian Driver by Florian Echtler
  • OpenCV 3.2.0
  • ArUco Markers
  • Custom Arduino-based Tangible Interaction Devices
  • Input Devices
  • TUIO2 by Martin Kaltenbrunner

Future Extensions:

  • Utilise a 4K Projector in order to visualise a workspace combining physical-digital affordances
  • Utilise a Depth-Camera or Stereo-Camera setup in order to track paper from above (sketched below)
  • Combine both approaches
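
A plausible starting point for tracking paper from above, sketched with plain OpenCV contour detection; the threshold and area values are guesses that would need tuning for real lighting conditions:

    import cv2

    def find_paper_quads(frame):
        """Return 4-point contours that plausibly outline sheets of paper."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # White paper is bright against most desk surfaces; 180 is a guess.
        _, mask = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)

        # findContours returns 3 values in OpenCV 3.x and 2 in 4.x;
        # indexing from the end works for both.
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]

        quads = []
        for c in contours:
            if cv2.contourArea(c) < 5000:  # ignore small specks
                continue
            approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
            if len(approx) == 4:           # four corners suggest a sheet
                quads.append(approx)
        return quads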

Current Work (April 2019)

Based on this explorative prototype, several smaller projects related to the SI concept have been identified and are being worked on.

These projects are:

  • Define the concept of Sketchable Interaction via its two core interaction concepts: collision and linking
  • Specify a formal language which describes an SI context
  • Build an SI Runtime or Engine accepting third-party interactive region definitions and effects
  • Motivate and qualify the tools chosen for building the SI Engine by measuring the end-to-end latency of programming languages and application frameworks, finding the best trade-off between performance and tool features (a reduced sketch of the measurement idea follows this list)
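
The measurement idea, much reduced: time how long a candidate framework needs from receiving an input event to finishing the corresponding redraw. This sketch is purely illustrative; true end-to-end measurements also cover sensor and display hardware, e.g. with a photodiode:

    import time

    samples = []

    def on_input_event(redraw):
        # Software-side slice of the end-to-end latency: input event to redraw.
        start = time.perf_counter()
        redraw()  # the candidate framework's draw call would go here
        samples.append((time.perf_counter() - start) * 1000.0)

    for _ in range(1000):
        on_input_event(lambda: None)  # stand-in for a real redraw

    samples.sort()
    print(f"median software latency: {samples[len(samples) // 2]:.3f} ms")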

Ideas, progress, and general thoughts on these projects will be presented in future blog posts on this site.

Publications

Raphael Wimmer, Jürgen Hahn

Workshop "Rethinking Interaction" in conjunction with ACM CHI 2018

Users can define custom workflows by drawing regions on the desktop that determine how objects within these regions - such as digital documents or windows - behave.