DETAILED SESSION INFORMATION



JULY 16, 2020: 8-9:15 AM EDT (UTC -4)

SESSION CHAIRS: Dr. Steven Kemper & Dr. Carla Caballero Sánchez

 

8-8:15
Stories About the Future: An Initial Dataset Exploring How Co-movement with Robots Affects Perceptions About Robot Capability

 

Catie Cuan, Joseph Hoffswell, and Amy LaViers

 

ABSTRACT

Anxiety about the automation of large classes of jobs creates an area of research around how to evolve the workforce in parallel with advances in robotic technology. Gaining meaningful experience with robots, such as studying them in school, is not an option for every American, leaving media and stories to fill the void. This paper first presents an analysis of popular narratives about robots, finding largely negative and violent depictions in popular movies. Then, the paper reports on an initial experiment with human participants on existing attitudes about robots and how those may change with meaningful, non-narrative exposure to these machines. A pilot study with 12 participants was designed and deployed in a targeted community. Initial findings, along with directions for future work, are discussed. The accessible, exhibit-like design of this work may offer a scalable framework that makes it possible for more people to gain real-life experience with robots.

 

8:15-8:30
Let's Resonate! How to Elicit Improvisation and Letting Go in Interactive Digital Art

 

Jean-François Jego and Margherita Bergamo

ABSTRACT


Participatory art allows the spectator to be a participant: a viewer who engages actively with interactive art. Real-time technologies offer new ways to create participative artworks. Here we investigate how to engage participation through movement in interactive digital art, and what this engagement can awaken, focusing on ways to elicit improvisation and letting go. We analyze two Virtual Reality installations, "InterACTE" and "Eve, dance is an unplaceable place," involving body movement, dance, creativity and the presence of an observing audience. We evaluate the premises, the setup, and the feedback of the spectators in the two installations. We propose a model following three different perspectives of resonance: 1. Inter Resonance between Spectator and Artwork, which involves curiosity, imitation, playfulness and improvisation. 2. Inner Resonance of the Spectator him/herself, where embodiment and creativity contribute to the sense of being present and letting go. 3. Collective Resonance between Spectator/Artwork and Audience, which is stimulated by curiosity and triggers motor contagion, engagement and gathering. The two analyzed examples seek to awaken open-minded communicative possibilities through the use of interactive digital artworks. Moreover, the need to recognize and develop the idea of resonance becomes increasingly important in this time of urgency to communicate, understand and support collectivity.

 

8:30-8:45
Designing Glitch Procedures and Re-visualisation Strategies for Markerless Live Motion Capture of Contemporary Dance

 

Stephan Jürgens, Nuno N. Correia, and Raul Masu

 

ABSTRACT


This paper presents a case study in the exploration and creative use of errors and glitches in the real-time markerless motion capture of contemporary dance. We developed a typology of MoCap failures comprising seven categories, allowing the user to situate each distinct error in the respective stage of the motion capture pipeline. In this way, glitch procedures for the creative use of 'bad' MoCap data were designed, resulting in uncommon avatar visualisations. We propose an additional 're-visualisation' module in our motion capture pipeline and avatar staging approach, which enables choreographers and digital artists to rapidly prototype their ideas in a mixed reality performance environment. Finally, we discuss how our extended MoCap pipeline and avatar staging set-up can support artists and researchers who aim for a flexible and adaptive workflow in real-time motion visualisation.
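
The abstract does not enumerate the seven categories, but a minimal Python sketch can illustrate the mechanism it describes: failure categories are situated in the pipeline stage where they originate, and each detected failure is routed to a deliberate visual effect rather than being corrected. All category, stage and effect names below are illustrative placeholders, not the authors' actual taxonomy.

    from enum import Enum

    class PipelineStage(Enum):
        CAPTURE = "sensor capture"
        RECONSTRUCTION = "skeleton reconstruction"
        RETARGETING = "avatar retargeting"

    class FailureType(Enum):
        # Placeholder categories; the paper defines seven specific ones.
        OCCLUSION = "body part hidden from the sensor"
        TRACKING_LOSS = "performer leaves the capture volume"
        JOINT_JITTER = "noisy joint position estimates"
        LIMB_SWAP = "left and right limbs confused"

    # Situate each failure in the pipeline stage where it originates.
    FAILURE_STAGE = {
        FailureType.OCCLUSION: PipelineStage.CAPTURE,
        FailureType.TRACKING_LOSS: PipelineStage.CAPTURE,
        FailureType.JOINT_JITTER: PipelineStage.RECONSTRUCTION,
        FailureType.LIMB_SWAP: PipelineStage.RECONSTRUCTION,
    }

    def glitch_procedure(failure, frame):
        """Turn a detected failure into a deliberate visual effect on a frame
        (a dict of per-joint data) instead of correcting or discarding it."""
        stage = FAILURE_STAGE[failure]
        if stage is PipelineStage.CAPTURE:
            frame["effect"] = "echo_last_valid_pose"  # expose the dropout
        elif stage is PipelineStage.RECONSTRUCTION:
            frame["effect"] = "amplify_joint_noise"   # exaggerate, don't smooth
        return frame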

 

8:45-9
MoViz: A Visualization Tool for Comparing Motion Capture Data Clustering Algorithms

 

Lucas Liu, Duri Long, and Brian Magerko

 

ABSTRACT


Motion capture data is useful for machine learning applications in a variety of domains (e.g. movement improvisation, physical therapy, character animation in games), but many of these domains require large, diverse datasets with data that is difficult to label. This has precipitated the use of unsupervised learning algorithms for analyzing motion capture datasets. However, there is a distinct lack of tools that aid in the qualitative evaluation of these unsupervised algorithms. In this paper, we present the design of MoViz, a novel visualization tool that enables comparative qualitative evaluation of otherwise "black-box" algorithms for pre-processing and clustering large and diverse motion capture datasets. We applied MoViz to the evaluation of three different gesture clustering pipelines used in the LuminAI improvisational dance system. This evaluation revealed features of the pipelines that may not otherwise have been apparent, suggesting directions for iterative design improvements. This use case demonstrates the potential for this tool to be used by researchers and designers in the field of movement and computing seeking to better understand and evaluate the algorithms they are using to make sense of otherwise intractably large and complex datasets.
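
As a rough illustration of the kind of comparison such a tool supports (this sketch is not MoViz itself), the following Python runs two hypothetical clustering pipelines on the same stand-in motion capture features and plots their cluster assignments side by side in a shared 2D projection. It assumes numpy, scikit-learn and matplotlib; the data and pipeline choices are invented for illustration.

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Stand-in for per-gesture features (e.g. flattened joint trajectories).
    features = rng.normal(size=(300, 60))

    # Two hypothetical pipelines run on the same data.
    pipelines = {
        "raw + k-means": KMeans(n_clusters=4, n_init=10).fit_predict(features),
        "PCA(10) + k-means": KMeans(n_clusters=4, n_init=10).fit_predict(
            PCA(n_components=10).fit_transform(features)
        ),
    }

    # A shared 2D projection lets both label sets be inspected in one space.
    xy = PCA(n_components=2).fit_transform(features)

    fig, axes = plt.subplots(1, len(pipelines), figsize=(10, 4))
    for ax, (name, labels) in zip(axes, pipelines.items()):
        ax.scatter(xy[:, 0], xy[:, 1], c=labels, s=8)
        ax.set_title(name)
    plt.tight_layout()
    plt.show()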

 

9-9:15
MotionHub: Middleware for Unification of Multiple Body Tracking Systems

 

Philipp Ladwig, Kester Evers, Eric J. Jansen, Ben Fischer, David Nowottnik, and Christian Geiger

ABSTRACT


There is a substantial number of body tracking systems (BTSs), covering a wide range of technologies, quality levels and prices, for character animation, dancing or gaming. To the disadvantage of developers and artists, almost every BTS streams out different protocols and tracking data. Not only do they vary in terms of scale and offset, but their skeletal data also differs in the rotational offsets between joints and in the overall number of bones. As a consequence, BTSs are not effortlessly interchangeable. Usually, software that makes use of a BTS is rigidly bound to it, and a change to another system can be a complex procedure. In this paper, we present our middleware solution MotionHub, which can receive and process data from different BTS technologies. It converts the spatial as well as the skeletal tracking data into a standardized format in real time and streams it to a client (e.g. a game engine). That way, MotionHub ensures that a client always receives the same skeletal data structure, irrespective of the BTS used. With a simple interface enabling the user to easily change, set up, calibrate, operate and benchmark different tracking systems, the software targets artists and technicians. MotionHub is open source, and other developers are welcome to contribute to the project.
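
A minimal Python sketch of the unification idea, assuming nothing about MotionHub's actual implementation or API: per-system adapters map vendor joint names onto one standard skeleton and correct scale and offset before a frame is handed to the client, so the client always receives the same joint set. All vendor names, joint names and numbers are invented for illustration.

    STANDARD_JOINTS = ["hips", "spine", "head", "hand_l", "hand_r"]

    ADAPTERS = {
        "vendor_a": {  # e.g. a depth-camera BTS reporting millimetres
            "scale": 0.001,                      # mm -> m
            "offset": (0.0, 0.0, 0.0),
            "joint_map": {"SpineBase": "hips", "SpineMid": "spine",
                          "Head": "head", "HandLeft": "hand_l",
                          "HandRight": "hand_r"},
        },
        "vendor_b": {  # e.g. a suit BTS already in metres, shifted origin
            "scale": 1.0,
            "offset": (0.0, -0.9, 0.0),
            "joint_map": {"pelvis": "hips", "chest": "spine",
                          "skull": "head", "l_hand": "hand_l",
                          "r_hand": "hand_r"},
        },
    }

    def to_standard_frame(system, raw):
        """Convert one raw frame {vendor_joint: (x, y, z)} into the
        standardized skeleton format shared by all clients."""
        cfg = ADAPTERS[system]
        frame = {}
        for vendor_name, pos in raw.items():
            std_name = cfg["joint_map"].get(vendor_name)
            if std_name is None:
                continue  # drop bones the standard skeleton does not carry
            frame[std_name] = tuple(
                p * cfg["scale"] + o for p, o in zip(pos, cfg["offset"])
            )
        # Clients always receive the same joint set, irrespective of the BTS.
        return {j: frame.get(j, (0.0, 0.0, 0.0)) for j in STANDARD_JOINTS}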