CollaborativeConstraint:
UI for collaborative 3D manipulation operations

Naëm Baron

Collaboration in virtual environments (VEs) is important, as it offers a new perspective on interactions with and within these environments. We propose a 3D manipulation method designed for multi-user scenarios that takes advantage of the extended information available to all users. CollaborativeConstraint (ColCo) is a simple method for performing canonical 3D manipulation operations by means of a 3D user interface (UI). It focuses on collaborative tasks in virtual environments based on constraint definition. Communication needs are reduced as much as possible by using an easy-to-understand synchronization mechanism and visual feedback. In this paper we present the ColCo concept in detail and demonstrate its application with a test setup.
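
The abstract leaves the constraint mechanism at a high level; as a minimal sketch of one plausible reading (all names below are ours, not from the paper), one user could define an axis constraint that filters the free 3D motion applied by a collaborator:

    # Hypothetical sketch of ColCo-style constraint filtering: one user
    # defines an axis constraint, another user's free 3D input is projected
    # onto it before being applied to the shared object.
    import numpy as np

    class AxisTranslationConstraint:
        """Restricts translation to a single axis chosen by a collaborator."""
        def __init__(self, axis):
            self.axis = np.asarray(axis, dtype=float)
            self.axis /= np.linalg.norm(self.axis)

        def apply(self, delta):
            # Keep only the component of the requested motion along the axis.
            return self.axis * float(np.dot(delta, self.axis))

    # User A constrains motion to the world X axis; user B drags freely.
    constraint = AxisTranslationConstraint([1.0, 0.0, 0.0])
    raw_drag = np.array([0.4, 0.9, -0.2])   # user B's unconstrained input
    position = np.zeros(3)
    position += constraint.apply(raw_drag)
    print(position)                          # -> [0.4, 0.0, 0.0]

Rotation or scale constraints would follow the same pattern: one user narrows the available DOFs, the other supplies the motion.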

When the Giant Meets the Ant: An Asymmetric Approach for Collaborative Object Manipulation

Morgan Le Chénéchal (1), Jérémy Lacoche (1), Jérôme Royan (1),
Thierry Duval (1,2), Valérie Gouranton (1,3), Bruno Arnaldi (1,3)

(1) IRT b<>com
(2) Télécom Bretagne / Lab-STICC
(3) INSA Rennes / IRISA / Inria

For the 3DUI Contest 2016, we propose an innovative approach that enables two or more users to manipulate an object collaboratively. Our solution is based on an asymmetric collaboration pattern at different scales, in which users benefit from points of view and interaction techniques suited to their device setups. Our system provides an efficient way to co-manipulate an object within irregular and narrow courses, such as the scenes provided as contest material, taking advantage of asymmetric roles in synchronous collaboration.
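
As a rough illustration of the asymmetric pattern (hypothetical names; the paper does not publish code), the same hand gesture can produce coarse motion for the large-scale "Giant" user and fine motion for the small-scale "Ant" user by scaling each user's input into shared world units:

    # Hypothetical sketch of asymmetric scale: each user's physical hand
    # motion is converted into world units at that user's own scale, so the
    # Giant moves the object coarsely and the Ant refines it.
    import numpy as np

    class ScaledUser:
        def __init__(self, name, world_units_per_meter):
            self.name = name
            self.scale = world_units_per_meter

        def to_world_motion(self, hand_delta_m):
            return np.asarray(hand_delta_m, dtype=float) * self.scale

    giant = ScaledUser("giant", world_units_per_meter=5.0)   # coarse control
    ant = ScaledUser("ant", world_units_per_meter=0.2)       # fine control

    same_gesture = [0.1, 0.0, 0.0]   # both users move a hand 10 cm
    print(giant.to_world_motion(same_gesture))   # -> [0.5, 0.0, 0.0]
    print(ant.to_world_motion(same_gesture))     # -> [0.02, 0.0, 0.0]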

Collaborative 3D Manipulation using Mobile Phones

Jerônimo G. Grandi (1), Iago Berndt (1), Henrique G. Debarba (2),
Luciana Nedel (1), Anderson Maciel (1)

(1) Institute of Informatics, Federal University of Rio Grande do Sul, Brazil
(2) Immersive Interaction Group, École Polytechnique Fédérale de Lausanne, Switzerland

We present a 3D user interface for the collaborative manipulation of three-dimensional objects in virtual environments. It maps the inertial sensors, touch screen, and physical buttons of a mobile phone onto well-known gestures to alter the position, rotation, and scale of virtual objects. As these transformations require the control of multiple degrees of freedom (DOFs), collaboration is proposed as a solution for coordinating the modification of all available DOFs. Users are free to decide their own manipulation roles. All virtual elements are displayed on a single shared screen, which makes it convenient to gather multiple users in the same physical space.
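
A minimal sketch of this sensor-to-DOF mapping follows; the function names and gains are hypothetical, since the abstract names the inputs but not the exact mapping:

    # Hypothetical mapping of phone inputs to manipulation DOFs: touch drags
    # translate, incremental gyro rotation rotates, a pinch factor scales.
    from dataclasses import dataclass, field

    @dataclass
    class Transform:
        position: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
        euler: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
        scale: float = 1.0

    def apply_touch_drag(t, dx, dy, gain=0.01):
        # Screen-space drag moves the object in the view's XY plane.
        t.position[0] += dx * gain
        t.position[1] -= dy * gain   # screen Y grows downward

    def apply_gyro(t, droll, dpitch, dyaw):
        # Incremental device rotation is forwarded to the object.
        t.euler[0] += droll
        t.euler[1] += dpitch
        t.euler[2] += dyaw

    def apply_pinch(t, pinch_factor):
        t.scale *= pinch_factor

    obj = Transform()
    apply_touch_drag(obj, dx=120, dy=-40)
    apply_gyro(obj, 0.0, 0.1, 0.0)
    apply_pinch(obj, 1.25)
    print(obj)

Under this reading, collaboration amounts to different users invoking different handlers on the same shared Transform.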

Batmen – Hybrid Collaborative Object Manipulation Using Mobile Devices

Marcio Cabral, Gabriel Roque, Mario Nagamura, Andre Montes,
Eduardo Zilles Borba, Celso Kurashima, Marcelo Zuffo

Interdisciplinary Center in Interactive Technologies – Polytechnic School – University of São Paulo

In this work we present an interactive and collaborative 3D object manipulation system using off-the-shelf mobile devices coupled with Augmented Reality (AR) technology, allowing multiple users to collaborate concurrently on a scene. Each participating user operates both an Android mobile device and a desktop (or laptop) working in tandem. The 3D scene is visualized on the desktop system, while changes in the scene viewpoint and object manipulation are performed with the mobile device through object tracking. Multiple users can collaborate on object manipulation, each using a laptop and a mobile device. The system leverages users' knowledge of common gesture-based tasks on current mobile devices. We built a prototype that allows users to complete the requested tasks and performed an informal user study with experienced VR researchers to validate the system.
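
As an illustrative sketch of the tandem idea (our names, not the authors' code), the desktop could apply the frame-to-frame change in the phone's tracked pose to the manipulated object:

    # Hypothetical pose-delta manipulation: the phone reports a tracked 4x4
    # pose each frame, and the desktop applies the change in that pose to
    # the object, so moving the phone moves the object accordingly.
    import numpy as np

    def pose_delta(prev_pose, new_pose):
        # Rigid transform that maps the old phone pose onto the new one.
        return new_pose @ np.linalg.inv(prev_pose)

    def step(object_pose, prev_phone_pose, new_phone_pose):
        return pose_delta(prev_phone_pose, new_phone_pose) @ object_pose

    identity = np.eye(4)
    moved = np.eye(4)
    moved[0, 3] = 0.05                        # phone moved 5 cm along X
    obj = step(np.eye(4), identity, moved)
    print(obj[:3, 3])                         # object translated by the same 5 cm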

Collaborative Hybrid Virtual Environment

Leonardo Pavanatto Soares (1), Thomas Volpato de Oliveira (1), Vicenzo Abichequer Sangalli (1),
Márcio Sarroglia Pinho (1), Regis Kopper (2)

(1) School of Informatics, PUCRS
(2) School of Engineering, Duke University

Supposing that, in a system operated by two users in different positions, some operations are easier for one user than for the other, we developed a 3D User Interface (3DUI) that allows two users to interact with an object together, using the three modification operations (scale, rotate, and translate) to reach a goal. The operations can be performed using two augmented reality cubes, which provide up to 6 degrees of freedom, and every user can select any operation by cycling through them with a keyboard button. Two different points of view are assigned to the cubes: an exocentric view, where the user stands at a distance from the object, with a point of view similar to that of a human observer; and an egocentric view, where the user stands much closer to the object, seeing the scene from the object's perspective. These points of view are locked to each user, meaning that a user cannot use both views, only the one assigned to their ID. The cameras have a small margin of movement, allowing only a sideways tilt that follows the Oculus's movements. With these features, this 3DUI aims to test which point of view is better suited to each operation, and how the degrees of freedom should be divided between the users.
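
A minimal sketch, with hypothetical names, of two mechanisms described above: cycling through the three operations with a key press, and locking each user to the single view assigned to their ID:

    # Hypothetical operation cycling and per-user view locking.
    OPERATIONS = ["translate", "rotate", "scale"]
    VIEW_BY_USER_ID = {0: "exocentric", 1: "egocentric"}   # fixed per user

    class User:
        def __init__(self, user_id):
            self.user_id = user_id
            self.op_index = 0

        @property
        def view(self):
            # The view is locked to the ID; users cannot swap views.
            return VIEW_BY_USER_ID[self.user_id]

        def cycle_operation(self):
            # One keyboard press advances to the next operation, wrapping.
            self.op_index = (self.op_index + 1) % len(OPERATIONS)
            return OPERATIONS[self.op_index]

    u0, u1 = User(0), User(1)
    print(u0.view, u1.view)        # -> exocentric egocentric
    print(u0.cycle_operation())    # -> rotate
    print(u0.cycle_operation())    # -> scale
    print(u0.cycle_operation())    # -> translate (wraps around)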

Ray, Camera, Action! A Technique for Collaborative 3D Manipulation

Wallace Lages
Center for Human-Computer Interaction, Virginia Tech, U.S.A.
Universidade Federal de Minas Gerais, Brazil

In this paper we present a technique to support collaborative 3D manipulation. Our approach is based on two or more users jointly specifying the parameters of each transformation using a point, a ray, and a scalar value. We discuss how this concept can be coupled with a camera system to create a scalable technique that can accommodate both parallel and serial collaboration.
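
As a worked illustration (hypothetical function names; the paper's exact parameterization may differ), a point, a ray, and a scalar suffice to parameterize each canonical transformation: translate along the ray by the scalar, rotate about the ray through the point by the scalar angle, and scale about the point by the scalar factor:

    # Hypothetical point/ray/scalar parameterization of the three operations.
    import numpy as np

    def translate(p, ray_dir, s):
        # Move the point a distance s along the (normalized) ray direction.
        return p + s * ray_dir / np.linalg.norm(ray_dir)

    def rotate(p, pivot, axis, angle):
        # Rodrigues' rotation of p about an axis through the pivot point.
        k = axis / np.linalg.norm(axis)
        v = p - pivot
        v_rot = (v * np.cos(angle) + np.cross(k, v) * np.sin(angle)
                 + k * np.dot(k, v) * (1 - np.cos(angle)))
        return pivot + v_rot

    def scale(p, center, s):
        # Uniform scaling of p about the given center point.
        return center + s * (p - center)

    p = np.array([1.0, 0.0, 0.0])
    print(translate(p, np.array([0.0, 0.0, 1.0]), 2.0))              # -> [1, 0, 2]
    print(rotate(p, np.zeros(3), np.array([0.0, 0.0, 1.0]), np.pi / 2))  # -> ~[0, 1, 0]
    print(scale(p, np.zeros(3), 3.0))                                # -> [3, 0, 0]

Splitting these parameters among users (one supplies the point, another the ray, another the scalar) is what lets the technique scale to both parallel and serial collaboration.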