DETAILED SESSION INFORMATION
JULY 15, 2020: 8:30-9:30 AM EDT (UTC-4)
SESSION CHAIRS: Dr. Vilelmini Kalampratsidou & Dr. Amy LaViers
8:30-8:45 AM
Feasible Stylized Motion: Robotic Manipulator Imitation of a Human Demonstration with Collision Avoidance and Style Parameters in Increasingly Cluttered Environments
Roshni Kaushik, Anant Kumar Mishra, and Amy LaViers
Abstract
Socially intelligent robots are a priority for large manufacturing companies that want to deploy collaborative robots in many countries around the world. This paper presents an approach to robot motion generation in which a human demonstration is imitated, collisions are avoided, and a “style” is applied to subtly modify the feasible motion. The framework integrates three subsystems to create a holistic method that navigates the trade-off between form and function. The first subsystem uses depth camera information to track a human skeleton and create a low-dimensional motion model. The second subsystem applies these angles to a simulated UR3 robot, modifying them to produce a feasible trajectory. The generated trajectory avoids physically infeasible configurations and collisions with the environment while remaining as close to the original demonstration as possible. The final subsystem applies four style parameters, based on prior work using Laban Effort Factors, to endow the trajectory with a specific “style”. This approach creates adaptive robot behavior in which one human demonstration can result in many subtly different robot motions. The effectiveness of the hybrid approach, which considers functional as well as expressive goals, is demonstrated in three environments of increasing clutter. As expected, in more cluttered environments the desired imitation is not as pronounced as in unconstrained environments. Potential applications of this framework include programming robot motion on a factory floor with greater efficiency, as well as creating feasible motion on multiple robots from a single demonstration. This quantitative work highlights the Function/Expression duality named in the Laban/Bartenieff Movement System, illuminating how the arts are critical for “practical” spaces like the factory.
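As a rough illustration of the final subsystem, the sketch below shows one way Effort-inspired style parameters could reshape an already-feasible joint trajectory, here by time-warping and amplitude-scaling the motion. Function and parameter names are hypothetical; this is not the authors' code, and the paper's exact parameterization is not given in the abstract.

```python
import numpy as np

def apply_style(trajectory, tempo=1.0, amplitude=1.0):
    """Hypothetical sketch: modulate a feasible joint trajectory with
    Laban-Effort-inspired style parameters.

    trajectory: (T, J) array of joint angles over T timesteps.
    tempo:      > 1 compresses the motion in time (reads as more
                'sudden'); < 1 stretches it (more 'sustained').
    amplitude:  scales excursion from the mean pose, exaggerating
                (> 1) or damping (< 1) the gesture.
    """
    T, J = trajectory.shape
    # Resample in time to change tempo.
    new_T = max(2, int(round(T / tempo)))
    t_old = np.linspace(0.0, 1.0, T)
    t_new = np.linspace(0.0, 1.0, new_T)
    resampled = np.stack(
        [np.interp(t_new, t_old, trajectory[:, j]) for j in range(J)],
        axis=1,
    )
    # Scale deviation around the mean pose to change amplitude.
    mean_pose = resampled.mean(axis=0, keepdims=True)
    return mean_pose + amplitude * (resampled - mean_pose)
```

In a pipeline like the one described, a styled trajectory produced this way would still need to pass back through the feasibility/collision check before execution.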
8:45-9:00 AM
Neural Connectivity Evolution during Adaptive Learning with and without Proprioception
Harshit Bokadia, Jonathan Cole, and Elizabeth Torres
Abstract
Understanding the brain connectivity patterns that may spontaneously emerge in response to biofeedback training remains of great interest to neuroscientists. Along those lines, Brain-Computer Interfaces (BCIs) mediated by EEG signals that dynamically evolve as the user attempts to control a cursor on the screen have helped identify brain areas recruited during the learning process. An adaptive process takes place between the computer algorithm and the solution the brain arrives at to mentally control the instructed cursor direction through intentional thoughts. Using new personalized techniques, we here address how different participants learn during this co-adaptive process, in which bodily motions are curtailed in favor of mental motion. At first, the person uses mental imagery of directional movements to attempt cursor control, but as the computer algorithm and the brain work together to gain accuracy, this mental imagery reportedly reaches a different level of abstraction, to the point where participants are mentally controlling the external computer cursor yet no longer imagining the movement direction. We compared the evolution of a participant without proprioception, owing to neuronopathy, to that of participants with intact afferent nerves and found fundamentally different patterns of activation. In the former, connectivity patterns were far more extensive and distributed across the entire brain during the initial stages of learning, and the changes across learning stages were more pronounced than in the other participants. We infer from this result that in the absence of kinesthetic reafference, heavy reliance on other senses such as vision and hearing may endow the brain with a higher capacity to handle the excess cognitive load.
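The abstract does not specify the connectivity measure used; as a minimal stand-in, the sketch below estimates a connectivity pattern from multichannel EEG via pairwise correlation and summarizes how distributed it is. All names are illustrative.

```python
import numpy as np

def connectivity_matrix(eeg, threshold=0.5):
    """Illustrative only: one common way to summarize 'connectivity'
    from multichannel EEG is a pairwise-correlation adjacency matrix.

    eeg: (channels, samples) array for one learning stage.
    Returns a boolean matrix linking channel pairs whose absolute
    correlation exceeds the threshold.
    """
    corr = np.corrcoef(eeg)           # (channels, channels)
    adj = np.abs(corr) > threshold
    np.fill_diagonal(adj, False)      # ignore self-connections
    return adj

def density(adj):
    """Fraction of linked channel pairs: a crude scalar for how
    widely distributed activity is, comparable across stages."""
    n = adj.shape[0]
    return adj.sum() / (n * (n - 1))
```

Comparing such matrices stage by stage is one simple way to quantify the kind of evolution across learning stages that the abstract describes.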
9:00-9:15 AM
Person Identification Based on Sign Language Motion: Insights from Human Perception and Computational Modeling
Félix Bigand, Élise Prigent, and Annelies Braffort
Abstract
Previous research has shown that human perceivers can identify individuals from biological movements such as walking or dancing. It remains to be investigated whether sign language motion, which obeys constraints other than purely biomechanical ones, also allows for person identification. The present study is the first to investigate whether deaf perceivers recognize signers based on motion capture (mocap) data alone. Point-light displays of 4 signers producing French Sign Language utterances were presented to a group of deaf participants. Results revealed that participants identified familiar signers above chance level. Computational analysis of the mocap data provided further evidence that morphological cues were unlikely to be sufficient for signer identification. A machine learning approach aiming to evaluate the motion features that can account for human performance is currently being developed. First results of the model reveal high accuracy for signer identification based on the same stimulus material, even after normalizing for size and shape. The present behavioral and computational findings suggest that mocap data contain sufficient information to identify signers, beyond simple cues related to morphology.
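The authors' exact normalization procedure is not specified in the abstract; the sketch below (hypothetical names) shows a common way to remove size and shape cues from mocap data, so that a downstream classifier must rely on movement dynamics rather than morphology.

```python
import numpy as np

def normalize_mocap(frames, root_idx=0):
    """Sketch of size/shape normalization for point-light mocap data.

    frames: (T, M, 3) array of M marker positions over T frames.
    Removes translation by centering on a root marker, then divides
    by a per-subject size proxy so overall body scale is discarded.
    """
    # Remove translation: express all markers relative to the root.
    centered = frames - frames[:, root_idx:root_idx + 1, :]
    # Mean marker distance from the root as a size proxy.
    size = np.linalg.norm(centered, axis=2).mean()
    return centered / (size + 1e-8)
```

After this step, two signers of different heights performing the same movement yield much more similar inputs, which is the point of the control described above.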
9:15-9:30 AM
Recognition of Laban Effort Qualities from Hand Motion
Maxime Garcia and Rémi Ronfard
Abstract
In this paper, we conduct a study on recognizing motion qualities in hand gestures using virtual reality trackers attached to the hand. From this 6D signal, we extract Euclidean, equi-affine, and moving-frame features and compare their effectiveness in the task of recognizing Laban Effort qualities. Our experimental results reveal that equi-affine features are highly discriminant for this task. We also compare two classification methods. In the first, we trained separate HMM models for the 6 Laban Effort qualities (light, strong, sudden, sustained, direct, indirect). In the second, we trained separate HMM models for the 8 Laban motion verbs (dab, glide, float, flick, thrust, press, wring, slash) and combined them to recognize individual qualities. In our experiments, the second method gives improved results. Together, these findings suggest that low-dimensional signals from VR trackers can be used to predict motion qualities with reasonable precision.
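A minimal sketch of the per-class HMM scheme described above, assuming the hmmlearn library and Gaussian emissions (the authors' feature pipeline and model settings are not given in the abstract): one HMM is trained per label, and a gesture is assigned to the label whose model scores it highest. The standard verb-to-quality decomposition shows how verb-level predictions can be combined into Effort-quality predictions, as in the second method.

```python
import numpy as np
from hmmlearn import hmm  # assumed dependency; any HMM library would do

def train_models(sequences_by_label, n_states=5):
    """Train one Gaussian HMM per label from (T_i, D) feature sequences."""
    models = {}
    for label, seqs in sequences_by_label.items():
        X = np.vstack(seqs)                 # concatenate sequences
        lengths = [len(s) for s in seqs]    # per-sequence lengths
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    """Assign a gesture to the label with the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(seq))

# Standard decomposition of Laban's eight Basic Effort Actions into
# (weight, time, space) qualities; classifying a verb and reading off
# its qualities combines the eight verb models into per-quality output.
VERB_QUALITIES = {
    "dab":    ("light",  "sudden",    "direct"),
    "glide":  ("light",  "sustained", "direct"),
    "float":  ("light",  "sustained", "indirect"),
    "flick":  ("light",  "sudden",    "indirect"),
    "thrust": ("strong", "sudden",    "direct"),
    "press":  ("strong", "sustained", "direct"),
    "wring":  ("strong", "sustained", "indirect"),
    "slash":  ("strong", "sudden",    "indirect"),
}
```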