DETAILED SESSION INFORMATION


JULY 17, 2020: 8:30-9:45 AM EDT (UTC-4)

SESSION CHAIRS: Dr. Antonia Zaferiou & Dr. Aurie Hsu

Welcoming remarks by Dr. Jason Geary, Dean, Mason Gross School of the Arts, Rutgers University

 

8:30-8:45
Sonification of Heart Rate Variability can Entrain Bodies in Motion

 

Vilelmini Kalampratsidou and Elizabeth Torres

 

Abstract


In this work, we introduce a co-adaptive closed-loop interface driven by audio augmented with a parameterization of the dancer's heart rate in near real-time. In our set-up, two salsa dancers perform their routine dance (previously choreographed and well-rehearsed) and a spontaneously improvised piece led by the male dancer. They first dance their pieces while listening to the original version of the song (baseline condition). Then, we ask them to dance while listening to the music as altered, in near real-time, by the heart rate extracted from the female dancer. Salsa dancing is always led by the male. As such, their challenge is to adapt their movements, as a dyad, to the real-time change induced by the female dancer's heart activity.

Our work offers a new co-adaptive set-up for dancers, along with new data types and analytical methods to study two forms of dance: well-rehearsed choreography and improvisation. We show that small variations in heart activity, despite the heart's overall robustness as an autonomic function, can distinguish well between these two modes of dance.
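The closed loop described above can be illustrated with a minimal sketch. This is hypothetical code, not the authors' system: the function names, the choice of RMSSD as the heart-rate-variability measure, and all constants are illustrative assumptions about how near-real-time heart data might parameterize the music.

```python
# Hypothetical sketch (not the authors' implementation): compute a common HRV
# measure from RR intervals and map heart rate to a music playback-rate
# parameter, as one might in a heart-driven closed-loop sonification.

def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences (a standard HRV measure)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

def playback_rate(current_bpm, baseline_bpm=90.0, sensitivity=0.005):
    """Scale the music's playback rate around 1.0 as heart rate departs from baseline.

    baseline_bpm and sensitivity are illustrative constants.
    """
    return 1.0 + sensitivity * (current_bpm - baseline_bpm)
```

In such a loop, the audible change in the music would feed back into the dancers' movement, which in turn alters the heart signal driving the audio.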

 

8:45-9:00
Human-Sound Interaction: Towards a human-centred sonic interaction design approach

 

Balandino Di Donato, Christopher Dewey, and Tychonas Michailidis

Abstract


In this paper, we explore human-centred interaction design aspects that determine the realisation and appreciation of musical works (installations, compositions and performances), interfaces for sound design and musical expression, augmented instruments, sonic aspects of virtual environments, and interactive audiovisual performances. In this first work, with the human at the centre of the design, we began sketching modes of interaction with sound that can feel direct, engaging, natural and embodied within a collaborative, interactive, inclusive and diverse music environment. We define this as Human-Sound Interaction (HSI). To facilitate the exploration of HSIs, we prototyped SoundSculpt, a cross-modal system combining audio, holographic projection and mid-air haptic feedback. During an informal half-day workshop, we observed that HSIs through SoundSculpt have the potential to foster new ways of interacting with sound and to make them accessible to diverse musicians, sound artists and audiences.

 

9:00-9:15
Tap Dance as Medium for Composition: Notation and Technology

 

Jacob Thiede

 

Abstract


While there are sources for preserving and documenting movement, there are few clear options for memorizing and disseminating notation for tap dance. Two sources are more widely known: Labanotation and Kahnotation. The former uses a diagram of the human body to express how it should move over time. The latter uses images sequentially to convey movement. Labanotation is typically used for more classical and modern genres such as ballet and contemporary dance; Kahnotation was devised specifically for tap dance. While both are no more complex than music notation, they are vastly different from it and do not cross over into how a musician understands reading music. This paper aims to comprehensively analyze both notation systems by comparing and contrasting them with modern music notation. Additionally, I propose a new form of notation for both tap dancers and musicians, inspired by elements of percussion rudiments. Ultimately, I apply three suggested methods for composing for tap dance, in addition to creating a unique Max for Live device which allows tap dancers to actively interact with the computer (Ableton Live) and change tempo with effortless precision.
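The tempo-following idea behind such a device can be sketched outside of Max for Live. This is a hypothetical illustration, not the author's patch: the function name, window size, and median smoothing are all assumptions about how tap onsets might be turned into a tempo value that a DAW could follow.

```python
# Hypothetical sketch (not the author's Max for Live device): estimate a tempo
# in BPM from a dancer's recent tap onset times, the kind of mapping that
# could drive a host's tempo. Window size and median smoothing are
# illustrative choices to resist single-tap jitter.

def tempo_from_taps(onsets, window=4):
    """Estimate BPM from the median of the last `window` inter-onset intervals (in seconds)."""
    recent = onsets[-(window + 1):]
    intervals = sorted(b - a for a, b in zip(recent, recent[1:]))
    return 60.0 / intervals[len(intervals) // 2]
```

For example, evenly spaced taps half a second apart would yield a tempo of 120 BPM, and the median keeps one rushed or dragged tap from jolting the tempo.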

 

9:15-9:30
Virtually Constrained Dancing: Encoding Language in Movement and Sound

 

Devon Frost, Shannon Steele, and Lucas Bang

 

Abstract


This paper presents the development of the TED (Tap Encoding Decoding) program with results and reflections on its usage. TED is a program to be used in partnership with a tap dancer for decoding tap-dancing audio. To perform with TED, a tap dancer must execute their dance with steps that encode Morse code. This paper elaborates on the processes by which TED was developed, including methodologies such as audio signal peak detection for tap dancing and audio decoding analysis. We also explore the relationship developed between the dancer and TED during experimentation and live performance. We draw upon notions of extended and embodied cognition to explain observations regarding the dancer's feedback-driven adaptations to TED's outputs during performance. The programmatic constraints introduced during partnership with TED result in novel choreographic challenges and impose atypical structure on improvisation, leading to unusual performance characteristics.
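The pipeline the abstract names, peak detection followed by Morse decoding, can be sketched minimally. This is hypothetical code, not the TED implementation: the threshold-crossing detector, the gap thresholds, and the convention (short gap = dot, longer gap = dash, longest gap = letter boundary) are illustrative assumptions.

```python
# Hypothetical sketch (not the TED implementation): pick tap onsets out of an
# audio signal by naive threshold crossing, then decode the gaps between
# onsets as Morse symbols. All thresholds and the encoding convention are
# illustrative assumptions.

MORSE = {".-": "A", "-...": "B", ".": "E", "....": "H", "---": "O", "...": "S", "-": "T"}

def detect_onsets(signal, rate, threshold=0.5, min_gap=0.1):
    """Return onset times (seconds) where |signal| crosses the threshold,
    keeping onsets at least min_gap seconds apart."""
    onsets, last = [], -min_gap
    for i, x in enumerate(signal):
        t = i / rate
        if abs(x) >= threshold and t - last >= min_gap:
            onsets.append(t)
            last = t
    return onsets

def decode_gaps(gaps, dash_gap=0.4, letter_gap=0.8):
    """Classify inter-onset gaps as dots or dashes; gaps >= letter_gap end a letter."""
    letters, current = [], ""
    for g in gaps:
        if g >= letter_gap:
            if current:
                letters.append(MORSE.get(current, "?"))
            current = ""
        elif g >= dash_gap:
            current += "-"
        else:
            current += "."
    if current:
        letters.append(MORSE.get(current, "?"))
    return "".join(letters)
```

Under this convention, three quick taps followed by a long pause and one slower gap would decode as "ST"; in a live setting the decoder's output would feed back to the dancer, producing the adaptation loop the paper discusses.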

 

9:30-9:45
Somatic Sonification in Dance Performances: From the Artistic to the Perceptual and Back

 

Andrea Giomi

Abstract


Since the end of the 1980s, interactive musical systems have played an increasingly relevant role in dance performances. More recently, the use of interactive auditory feedback for sensorimotor learning, such as movement sonification, has gained currency and scientific attention in a variety of fields ranging from rehabilitation to sport training, neuroscience and product design. This paper investigates the convergence between interactive music/dance systems and movement sonification in the field of dance. The main question we address is whether the emergence of the notion of sonification can foster new perspectives for practice-based artistic research. In this context, we highlight a fundamental shift of perspective from musical interactivity per se to the somatic knowledge provided by the real-time sonification of movement, which can be considered a major somatic-sonification turn.