Sketchable Interaction

Status: ongoing

Runtime: 2018 -

Participants: Jürgen Hahn, Raphael Wimmer

Keywords: Interaction Techniques, Interaction Design, Computer Vision

Goal

Development of a framework that gives application developers easy access to robust detection, tracking, and digitisation of physical documents or devices, combined with the affordances of virtual windows, files, etc., so that they can build and evaluate interaction techniques.

Figure 01: Digital twin of a physical document.


News

PDA Group at CHI 2018 (2018-04-21)

We will present a poster and a workshop paper at CHI 2018.


Status

Users can sketch interactive regions by using their fingers as a brush. They assign the desired effect to the brush via a context menu that is triggered by the detection of their hand. A sketched region then applies its effect to any colliding object. For example, to send a file to a specific person, users select the Send-via-Email effect for their brush via the hand context menu, sketch a region onto the surface, and drag the file's icon over the drawn region.
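
Conceptually, each sketched region can be modelled as a closed contour paired with one effect callback that fires on collision. The following minimal Python sketch illustrates this idea; all names (Region, send_via_email, the attributes on obj) are hypothetical, not the framework's actual API.

    class Region:
        """A user-sketched interactive region carrying a single effect."""

        def __init__(self, contour, effect):
            self.contour = contour  # list of (x, y) points sampled from the finger stroke
            self.effect = effect    # callable applied to every colliding object

        def contains(self, x, y):
            # Ray-casting point-in-polygon test on the sketched contour.
            inside = False
            j = len(self.contour) - 1
            for i, (xi, yi) in enumerate(self.contour):
                xj, yj = self.contour[j]
                if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
                    inside = not inside
                j = i
            return inside

        def on_object_moved(self, obj):
            # Apply the assigned effect as soon as a dragged object collides.
            if self.contains(obj.x, obj.y):
                self.effect(obj)

    # Hypothetical effect matching the Send-via-Email example above.
    def send_via_email(obj):
        print("sending", obj.path, "to the configured recipient")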

The current prototype supports five types of interaction possibilities (a minimal effect-registry sketch follows the list):

  • Seamless Zoom
    • when a file icon is dragged onto such a region, its content is seamlessly rendered readable
  • Region Delete
    • delete an undesired region by selecting it with your hand
  • Send-via-Email
    • email a physical document by dragging it onto such a region
    • email a digital file by dragging its icon onto such a region
  • Storage (applied once an eligible object is dragged onto such a region)
    • digitise a physical document in order to create a digital twin and visually emphasise the link between the two
    • print a new physical document based on the file's contents
  • Conveyor Belt
    • allow users to automate simple tasks by defining such regions and connecting them with other regions
    • allow users to temporarily store objects in a looped conveyor belt
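
Continuing the sketch above, these five region types can be treated as interchangeable callbacks that the hand context menu assigns to the brush. The registry below is purely illustrative; the method names on obj are assumptions.

    # Hypothetical registry mapping context-menu entries to effect callbacks.
    EFFECTS = {
        "seamless_zoom":  lambda obj: obj.render_content_readable(),
        "region_delete":  lambda obj: obj.delete(),  # here obj is the selected region
        "send_via_email": send_via_email,            # defined in the sketch above
        "storage":        lambda obj: obj.digitise() if obj.is_physical else obj.print_copy(),
        "conveyor_belt":  lambda obj: obj.forward_to_next_region(),
    }

    # The brush picks up whichever effect the user chose via the hand context menu:
    stroke = [(100, 100), (220, 110), (230, 210), (90, 200)]  # sampled finger positions
    region = Region(stroke, EFFECTS["send_via_email"])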

Prototype (July 2018)

Background

In order to implement and research possible interaction techniques for physical-digital workflows and workspaces, a developer-friendly framework is required that supports fast user, hardware, and software testing iterations. The framework's first iteration targets the Samsung SUR40 Multi-Touch Table (MTT), using its camera pixels to generate a 960×540 surface image of the otherwise 1080p display. This image is analysed for markers, text, etc. in order to prove interaction technique concepts and potential new input modalities, such as digitally stamping / tagging physical documents with a tangible interaction device.
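
As a hedged illustration of the marker-detection step, the snippet below runs the aruco module of OpenCV 3.2 on one 960×540 surface frame and maps detected marker centres to 1080p display coordinates. The dictionary choice and the surrounding function are assumptions, not the project's actual pipeline.

    import cv2

    # The pixel sensor delivers 960×540 while content is rendered at 1920×1080,
    # so detected sensor coordinates must be scaled by 2 for the display.
    SCALE = 1920 / 960

    # Dictionary choice is an assumption; any ArUco dictionary would work.
    DICTIONARY = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)

    def detect_tagged_documents(surface_image):
        # The SUR40 driver may already deliver a single-channel image,
        # in which case this conversion can be skipped.
        gray = cv2.cvtColor(surface_image, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = cv2.aruco.detectMarkers(gray, DICTIONARY)
        documents = []
        if ids is not None:
            for marker_corners, marker_id in zip(corners, ids.flatten()):
                # marker_corners has shape (1, 4, 2); average the four corners
                # to get the marker centre, then map it to display coordinates.
                cx, cy = marker_corners[0].mean(axis=0) * SCALE
                documents.append((int(marker_id), (cx, cy)))
        return documents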

Used Technologies:

  • Samsung SUR40
  • Custom Debian Driver by Florian Echtler
  • OpenCV 3.2.0
  • ArUco Markers
  • Custom Arduino-based Tangible Interaction Devices
  • Input Devices
  • TUIO2 by Martin Kaltenbrunner (a minimal receiver sketch follows this list)
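
TUIO2 events arrive as OSC messages. The receiver sketch below shows how one could listen to the table; the choice of python-osc and the handled message profiles (/tuio2/ptr for touch points, /tuio2/tok for tangibles, per the TUIO 2.0 specification) are assumptions, not the project's actual code.

    from pythonosc import dispatcher, osc_server

    def on_pointer(address, *args):
        # /tuio2/ptr messages describe touch points such as fingers.
        print("pointer:", args)

    def on_token(address, *args):
        # /tuio2/tok messages describe tagged tangibles, e.g. the
        # custom Arduino-based interaction devices.
        print("token:", args)

    disp = dispatcher.Dispatcher()
    disp.map("/tuio2/ptr", on_pointer)
    disp.map("/tuio2/tok", on_token)

    # TUIO senders conventionally transmit to UDP port 3333.
    server = osc_server.BlockingOSCUDPServer(("0.0.0.0", 3333), disp)
    server.serve_forever()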

Future Extensions:

  • Utilise a 4K projector in order to visualise a workspace combining physical and digital affordances
  • Utilise a depth-camera or stereo-camera setup in order to track paper from above
  • Combine both approaches

Publications

Raphael Wimmer, Jürgen Hahn

Workshop *Rethinking Interaction* in conjunction with ACM CHI 2018

Users can define custom workflows by drawing regions on the desktop that determine how objects within these regions, such as digital documents or windows, behave.

