AACT: Mobile Based Interaction Metaphor for Augmented Reality
Ayush Bhargava, Jeffrey Bertrand, and Sabarish V. Babu
Clemson University, USA
In this paper, we present our solution to the IEEE 3DUI 2017 contest. Our solution employs multi-finger touch gestures, together with the built-in camera and accelerometer of a mobile device, for interaction in an augmented reality setup. We leverage users' knowledge of everyday touch gestures, such as pinching and swiping, as well as physical device movement to interact with the environment. The interaction metaphor allows for successful art-piece creation and assembly.
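The pinch gesture mentioned above is typically mapped to uniform rescaling via the ratio of finger separations. A minimal sketch of that common mapping (an illustration of the general technique, not the authors' implementation):

```python
import math

def pinch_scale(f1_start, f2_start, f1_now, f2_now):
    """Map a two-finger pinch to a uniform scale factor:
    the ratio of the current to the initial finger separation."""
    d_start = math.dist(f1_start, f2_start)
    d_now = math.dist(f1_now, f2_now)
    return d_now / d_start if d_start > 0 else 1.0
```

Fingers moving apart yield a factor above 1 (grow the object); pinching inward yields a factor below 1 (shrink it).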
HOT: Hold your Own Tools for AR-Based Constructive Art
Giuseppe Attanasio, Alberto Cannavò, Francesca Cibrario, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati
Politecnico di Torino, Italy
Using digital instruments to support artistic expression and creativity has become a highly topical subject. In this work, we studied how to take advantage of augmented reality for constructive art. Our goal is to provide artists with a clean workspace, in which the scene follows the augmented reality (AR) paradigm and is usable through a touch-capable mobile device.
Several issues should be taken into account when designing 3D user interfaces for constructive art: how to give artists a sense of spatial dimensions, how to provide them with different tools for realizing artworks, and how far to move away from "the real" towards "the virtual". The proposed solution exploits a set of virtual tools to emulate the situation and environment of manual work; this is achieved with printed markers that the user can freely bring into the camera's field of view.
Furthermore, we implemented a custom solution to manipulate virtual objects in six degrees of freedom (6DOF).
Batmen Beyond: Natural 3D Manipulation with the BatWand
André Montes Rodrigues, Olavo Belloc, Eduardo Zilles Borba, Mario Nagamura, and Marcelo Knorich Zuffo
University of São Paulo, Brazil
In this work, we present an interactive 3D object manipulation system that uses off-the-shelf mobile devices coupled with Augmented Reality (AR) technology and allows editing 3D objects by way of natural interactions based on tangible interface paradigms. The set-up consists of a mobile device, an interactive wand marker, and AR markers laid on a table. The system allows users to change the viewpoint and execute operations on 3D objects - simultaneous translation and rotation, scaling, cloning, or deleting - through unconstrained natural interactions, leveraging users' proficiency in daily object manipulation tasks and speeding up such typical 3D manipulation operations. Depth perception was significantly enhanced with dynamic shadows, allowing fast alignment and accurate relative positioning of objects. The prototype presented here allows successful completion of the three challenges proposed by the 2017 3DUI Contest, as validated by a preliminary informal user study with participants from the target audience as well as the general public.
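The simultaneous translation and rotation described above amounts to making a grabbed object follow the wand's rigid motion between the grab pose and the current pose. A 2D sketch of that rigid-motion transfer, using complex numbers for brevity (an illustrative assumption, not the BatWand code):

```python
import cmath

def follow_wand(obj_pos, grab_pos, grab_angle, now_pos, now_angle):
    """Apply the wand's rigid motion (rotation + translation) since
    the grab to an object's 2D position, so the object moves as if
    rigidly attached to the wand."""
    rot = cmath.exp(1j * (now_angle - grab_angle))    # rotation delta
    offset = complex(*obj_pos) - complex(*grab_pos)   # object in wand frame
    new_pos = complex(*now_pos) + rot * offset
    return (new_pos.real, new_pos.imag)
```

The same idea extends to 3D with quaternions or 4x4 transforms; the structure (compose the pose delta with the grab-time offset) is unchanged.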
SculptAR: An Augmented Reality Interaction System
Vicenzo Abichequer Sangalli, Thomas Volpato de Oliveira, Leonardo Pavanatto Soares, and Marcio Sarroglia Pinho
In this work, we present a 3D mobile interface for creating sculptures in an augmented reality environment tracked by AR markers. A raycasting technique is used to interact with the objects in the scene, together with 2D and 3D interfaces to manipulate and modify them. Users can move, delete, paint, and duplicate virtual objects using 6-DOF techniques.
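Raycasting selection of this kind usually reduces to an intersection test between the touch-derived ray and each object's bounding volume. A sketch with bounding spheres (hypothetical helper names, not the SculptAR source):

```python
def ray_hits_sphere(origin, direction, center, radius):
    """Return True if the ray origin + t*direction (t >= 0)
    intersects a bounding sphere -- the core test behind
    raycasting-based selection."""
    # vector from ray origin to sphere center
    oc = [c - o for o, c in zip(origin, center)]
    d_len2 = sum(d * d for d in direction)
    # parameter of the closest approach, clamped to the ray's half-line
    t = max(sum(o * d for o, d in zip(oc, direction)) / d_len2, 0.0)
    closest = [o + t * d for o, d in zip(origin, direction)]
    dist2 = sum((p - c) ** 2 for p, c in zip(closest, center))
    return dist2 <= radius * radius
```

In practice the scene is scanned for the hit object nearest to the ray origin, which becomes the selection.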
Augmented Reality Digital Sculpture
Nathanael Harrell, Grayson Bonds, Xiaojia Wang, Sean Valent, Sabarish V. Babu, and Elham Ebrahimi
Clemson University, USA
We present our metaphor for object translation, rotation, and rescaling in an augmented reality environment using an Android smartphone or tablet, developed for the IEEE 3DUI competition at the 24th IEEE Virtual Reality Conference in Los Angeles, California. Our metaphor aims to map the three-dimensional interaction with objects in real-world space onto the two-dimensional plane of a smartphone or tablet screen. Our final product is the result of experimentation with different metaphors for translation, rotation, and rescaling, guided by the feedback of voluntary product testers. The result is an interaction technique between a mobile device and the virtual world that we believe to be intuitive.
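A standard way to map a two-dimensional screen drag onto three-dimensional translation is to move the object in a plane parallel to the image plane at its current depth, scaling pixels by depth over focal length. A pinhole-camera sketch of that mapping (our assumption; the abstract does not specify the exact mapping used):

```python
def drag_to_world(dx_px, dy_px, depth, focal_px):
    """Convert a screen-space drag (in pixels) into a world-space
    translation on a plane parallel to the image plane at the
    dragged object's depth, using a pinhole-camera model."""
    # screen y grows downward, world y grows upward: flip the sign
    return (dx_px * depth / focal_px, -dy_px * depth / focal_px)
```

The depth scaling makes a given finger motion move near objects a little and far objects a lot, so the object stays under the finger on screen.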
Collaborative Manipulation of 3D Virtual Objects in Augmented Reality Scenarios using Mobile Devices
Jerônimo G. Grandi, Iago Berndt, Henrique G. Debarba, Luciana Nedel, and Anderson Maciel
Federal University of Rio Grande do Sul, Brazil
Artanim Foundation, Switzerland
Interaction in augmented reality environments can be a very complex task, depending on the degrees of freedom (DOFs) involved. In this work, we present a 3D user interface for collaborative manipulation of three-dimensional objects in augmented reality (AR) environments. It maps position (acquired with a camera and fiducial markers) and touchscreen input of a handheld device into gestures to select, move, rotate, and scale virtual objects. As these transformations require the control of multiple DOFs, collaboration is proposed as a solution for coordinating the modification of all the available DOFs. Users are free to decide their own manipulation roles. All virtual elements are displayed directly on the mobile device as an overlay of the camera capture, providing each user with an individual point of view of the AR environment.
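Splitting the available DOFs among collaborators can be sketched as each DOF being driven only by the user who currently holds that role. A minimal illustration (the role names and data layout are our assumptions, not the paper's protocol):

```python
def merge_inputs(dof_owner, user_inputs):
    """Combine per-user manipulation deltas: each degree of freedom
    is driven only by the user who currently owns it; inputs from
    other users on that DOF are ignored."""
    return {dof: user_inputs.get(user, {}).get(dof, 0.0)
            for dof, user in dof_owner.items()}
```

With freely chosen roles, the `dof_owner` table would simply be updated whenever users renegotiate who controls which DOF.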
T4T: Tangible Interface for Tuning 3D Object Manipulation Tools
Alberto Cannavò, Fabio Cermelli, Vincenzo Chiaramida, Giovanni Ciccone, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati
Politecnico di Torino, Italy
A 3D user interface for manipulating virtual objects in Augmented Reality scenarios is presented. The 3DUI combines the sensors, touch screen, and well-known gestures of a handheld device, and provides 3D interaction techniques for selection, integrated 3-DOF positioning, and integrated 3-DOF rotation. The proposed solution takes advantage of two techniques for manipulating 3D items, involving the use of a cursor and of a tangible interface. The first ("cursor mode") uses a cursor whose position and movement are bound to the device's view; this cursor allows the user to select objects and to perform fast, coarse positioning. The second ("tuning mode") uses a tangible interface that lets the user refine objects in all their aspects (position, rotation, scale, color, and so forth). Users can switch between the two modes while operating in the scene, thus manipulating objects with their preferred method.
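Binding a cursor to the device's view, as in "cursor mode", can be realized by intersecting the camera's forward ray with the marker plane. A ray-plane sketch under the assumption of a horizontal plane at y = 0 (the abstract does not detail the exact binding used):

```python
def view_cursor_on_plane(cam_pos, view_dir, plane_y=0.0):
    """Place a cursor where the device's forward view ray hits the
    horizontal marker plane y = plane_y, so the cursor tracks the
    device's orientation as the user moves it."""
    if view_dir[1] == 0:
        return None  # ray parallel to the plane: no intersection
    t = (plane_y - cam_pos[1]) / view_dir[1]
    if t < 0:
        return None  # plane is behind the camera
    return tuple(p + t * d for p, d in zip(cam_pos, view_dir))
```

Moving or tilting the device sweeps the cursor across the workspace, which suits the fast, coarse positioning that cursor mode is meant for.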