The MotionComposer is a device that turns movement into music. It is being developed so that anyone, regardless of their abilities, can express themselves through movement and music.
The official website is www.motioncomposer.org.
History
The MotionComposer is designed for persons with different abilities, including those with cerebral palsy, aphasia, autism, quadriplegia, blindness, Alzheimer's or Parkinson's. It is for small children, senior citizens and everyone in between. The blink of an eye is enough to play a note!
The project began in 2010 as a spin-off from the Palindrome Dance Company. At the main MotionComposer website you can see who we are, and who is sponsoring our work.
Product History
Motion Composers
We made a first version, the MC2.0 (below), which we sold from 2014 to 2016. Currently, we are working on the MC3.0. We believe it can find an international market as a tool for therapy, research, artistic creation, education and fun!
MC2.0
How Does It Work?
Motion Composer Mac App
The MotionComposer uses a technology known as 'motion tracking'. Video cameras attached to a computer capture human position, shape and movement, which are interpreted and then used to control music software inside the computer. The result is that movement makes music.
Of course, this is trickier than it sounds. A computer does not know what a human being is. If you look around you, light is changing all the time. There are reflections and shadows; maybe trees are moving outside the window. We have to teach the computer to ignore all of these things and concentrate on the human form. This is why the MotionComposer uses a special kind of video technology called 'passive stereo vision'. Two cameras, like two eyes, give a 3D image of the world and allow the software to separate the human form from the background.
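The idea behind passive stereo vision can be sketched in a few lines: matching a point between the two camera images yields a pixel disparity, disparity gives depth by triangulation, and anything nearer than the back of the room is treated as the human form. The camera parameters and distances below are purely illustrative, not the MC's actual calibration.

```python
# Toy sketch of depth-from-disparity segmentation. All numbers are
# illustrative assumptions, not MotionComposer calibration values.

FOCAL_LENGTH_PX = 700.0   # hypothetical focal length, in pixels
BASELINE_M = 0.12         # hypothetical distance between the two cameras

def depth_from_disparity(disparity_px):
    """Triangulate depth (metres) from a stereo disparity (pixels)."""
    if disparity_px <= 0:
        return float("inf")  # no stereo match: treat as infinitely far
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

def foreground_mask(disparity_map, background_depth_m=3.0):
    """Mark every pixel closer than the back wall as part of the person."""
    return [[depth_from_disparity(d) < background_depth_m for d in row]
            for row in disparity_map]

# A person ~1.7 m away produces larger disparities (~49 px) than a wall
# 3.5 m away (~24 px), so their pixels fall inside the mask:
disparities = [[24, 49, 49],
               [24, 49, 24]]
print(foreground_mask(disparities))  # → [[False, True, True], [False, True, False]]
```

In practice the hard part is the matching itself, which is why this stage is handled by dedicated stereo-vision software rather than a simple threshold.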
Even after we have isolated the human form, there is still a lot of work to do. Human movement expression is complex. Some of our gestures and postures carry a lot of meaning, while others are insignificant and need to be ignored. The human mind does this quickly and easily, but again, computers are not so clever. We work with psychologists, choreographers (i.e. movement expression experts), composers and engineers on this interesting challenge.
We begin by breaking down human expression into four categories:
o Activity -- which body parts are moving, how much, and in which direction
o Shape -- for example, whether you are stretched tall or crouched low, or the position of your arms and legs in relation to your body
o Position -- where you are in the room
o Gesture -- special combinations of Activity and Shape that are used as a kind of communication; for example, 'Hit Downward' is a quick movement of the hand that we might use to play the sound of a drum
Each category is then further divided into left-side, right-side, upper body, lower body, etc. Thus, finally, the MotionComposer generates over 50 separate data streams relating to human expression.
Of course, we do not use so many parameters at one time! That would be much too confusing for the person playing the music. Normally only a few parameters are used for any given Musical Environment.
The MotionComposer contains six such Environments. Each one is the creation of a different composer and each one uses different body expressions to control different kinds of music.
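The relationship between the many tracking streams and the few parameters each environment actually uses could be sketched roughly as follows. All class and field names here are invented for illustration; they are not the MC's actual software interface.

```python
# Illustrative sketch: many expression streams, of which a single
# Musical Environment subscribes to only a few. Names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ExpressionStream:
    category: str   # 'activity', 'shape', 'position' or 'gesture'
    body_part: str  # e.g. 'left_arm', 'upper_body', 'whole_body'
    value: float = 0.0

@dataclass
class Environment:
    name: str
    inputs: list = field(default_factory=list)  # the few streams it listens to

    def relevant(self, streams):
        """Filter the full stream set down to this environment's mapping."""
        wanted = set(self.inputs)
        return {(s.category, s.body_part): s.value
                for s in streams if (s.category, s.body_part) in wanted}

streams = [
    ExpressionStream("activity", "left_arm", 0.8),
    ExpressionStream("shape", "whole_body", 0.2),
    ExpressionStream("position", "whole_body", 0.5),
]

drums = Environment("Drums", inputs=[("activity", "left_arm")])
print(drums.relevant(streams))  # → {('activity', 'left_arm'): 0.8}
```

The point of the sketch is the filtering step: however many streams the tracker produces, each environment's mapping stays small and comprehensible to the player.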
Some Technical Details
There are two kinds of MotionComposers: the MC2.0 and the MC3.0. They contain different software and hardware.
MOTIONCOMPOSER_2.0 (2014-2016)
HARDWARE
small format (ATX) computer (i7-level CPU)
separate sense-box (pictured above)
TOF (time of flight) based sensor
high resolution, low latency ethernet (CMOS) video
uses standard screen, mouse and keyboard
SOFTWARE
motion tracking based on EyesWeb (developed by InfoMus)
six interactive music environments:
'Particles' by Andreas Bergsland (written in CSound)
'Drums' by Andrea Cera (written in PD)
'Techno' by Marcello Lussana (written in SuperCollider)
'Accents' by Pablo Palacio (written in SuperCollider)
'Fields' by Giacomo Lepri (written in PD)
'Tonality' by Adrien Garcia and Ives Schachtschnabel (written in PD)
MOTIONCOMPOSER_3.0 (release ca. 2018-19)
HARDWARE
Sensor + Computer in an Integrated Chassis
passive stereo vision camera system
2x high resolution, low latency ethernet (CMOS) video cameras
tablet controller
optional interactive lights, for visual feedback
SOFTWARE
custom motion tracking by FusionSystems
control systems by MotionComposer GmbH
six interactive music environments:
'Particles' by Andreas Bergsland (written in CSound)
'Drums' by Andrea Cera (written in PD)
'Techno' by Marcello Lussana (written in SuperCollider)
'Importer' to import your own music
'Fields' by Giacomo Lepri (written in PD)
'Tonality' by Adrien Garcia and Ives Schachtschnabel (written in PD)
MC3.0 Software Structure
Design Principles
In the development of the MC, a number of design principles guide our work:
Inclusion - the device can be used equally by persons with and without disabilities
Synaesthesia - the device is both a musical instrument and a dance device
Artistically satisfying and/or entertaining - a quality music/movement experience, going beyond what a toy or gimmick can deliver
Inclusion is our primary goal. We want users with and without disabilities to be able to make music -- alone or with others -- on an equal, or nearly equal, footing. While this may sound utopian, it is not. In the simplest terms it means: 1) allowing many different body parts and kinds of movement to be used; 2) the device must be easy to operate, both for user and therapist; 3) it should sound pretty good however it is played; and 4) it must provide highly intuitive mapping and clear causality. Inclusion is also the main motivator behind our development of three different interaction modes, which we will discuss in detail below.
Synaesthesia refers to the confusion or overlapping of our senses. It is what happens when we "feel the music inside us" when we dance, or when our movements and the sound they cause become one and the same thing (this is discussed below in the section "Musical Instrument or Dance Device"). In our experience, synaesthesia strengthens users' engagement, their focus on the here-and-now and the causal connection between their own bodies and the sounds they hear. There are technical issues that can add to, or detract from, a synaesthetic experience. One of these is latency. With a sufficient lag from movement to resulting sound, the user will tend to feel that movement and sound are two separate events instead of one. First-person shooter computer games can tolerate quite a high latency between, say, shooting and when the monster is blown to bits, because what is important there is causality, not synaesthesia.
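Latency is cumulative: every stage from camera to loudspeaker adds its share, and keeping the total low enough for a synaesthetic experience means budgeting each stage. The figures below are illustrative assumptions, not measured MotionComposer values.

```python
# A toy latency-budget calculation. All stage timings are hypothetical
# assumptions for illustration, not measurements of the MC hardware.

STAGES_MS = {
    "camera exposure + readout": 8.0,
    "motion tracking":           12.0,
    "mapping / control":         2.0,
    "audio synthesis buffer":    5.8,   # e.g. 256 samples at 44.1 kHz
    "DAC + amplification":       1.0,
}

total = sum(STAGES_MS.values())
print(f"end-to-end latency: {total:.1f} ms")
for stage, ms in STAGES_MS.items():
    print(f"  {stage:28s} {ms:5.1f} ms ({ms / total:4.0%})")
```

A budget like this makes the trade-offs visible: in this hypothetical breakdown the motion tracking dominates, so halving the audio buffer would barely help, while a faster tracker would.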
To be Artistically Satisfying or entertaining is one of the most important and continuing challenges we face. It is the artistic quality that enables the device to remain interesting over time. There are different reasons why a user might want to spend time with a musical environment. One is that, with practice, she or he develops skill and is better able to shape her or his movements to achieve a desired effect; if the device continuously presents adequate challenges, the experience of mastery can drive and retain interest and involvement. A second is that there is a variety of both music and interaction metaphors to explore. Our approach has been to develop a set of environments which, from the user's point of view, each play different types of sounds and are based on different interaction metaphors. But we have also aimed to ensure that, even without switching environments, the musical responses have variation and interest in themselves, so that an identical movement will not necessarily produce an identical sound. For instance, the exact repetition of a sound sample enabled by digital audio technology will quickly feel tiring and even irritating to the user, so introducing variants and avoiding repetition has been important. A third reason is that the music is beautiful to hear and can be made with beautiful, appropriate-feeling movements -- in other words, we are seeking aesthetic experiences. While it is not always easy to pinpoint what triggers aesthetic experiences in each user, we are constantly aiming for this, thus setting goals that are just as much artistic as therapeutic in nature.
To encourage Inclusion, Synaesthesia and Artistic Satisfaction, we might think in terms of two metaphors: the musician, who very accurately controls small movements, and the dancer, who uses full-body movement to physicalize an artistic intent. Both seem valid to us, and in combining the two we seek a rich and varied user experience.
The three modes of use in the MC2
The MC has three modes of use, labeled room, chair and bed.
User interface for the MC2.0
Six Musical Environments
Another consequence of our principle of inclusion, given how different people enjoy different types of music, is our offering of varied environments: the chances are that most users can find something they find interesting, beautiful or engaging. So, in addition to the three modes of use, the user selects between six musical environments. Each environment offers different mappings and different styles of music. In addition, several of the environments have variants, with several sound banks or other settings, so that the overall musical potential of the device is rich and varied. In terms of musical genres or styles, we have implemented elements from classical, jazz, techno, latin, soundscape and electroacoustic music. We will now give an outline of the basic mappings, metaphors and musical content of the musical environments relevant in this context.
Tonality
The prevailing metaphor in this environment is that of playing an instrument, and for most users, one they are familiar with. The choice of instrument -- in the current version you can choose between piano, vibraphone and harpsichord -- can be set by the user or the therapist in the GUI (sitar, guitar and a Moog-like synthesizer will be added in the next generation of the device). The choice of notes comes about through a combination of user input and features built into the system: although the user chooses the approximate note value, or whether the notes are ascending or descending, the exact selection is controlled by the system, which uses algorithms to ensure that the notes are in accordance with an underlying musical logic, rendering a strong sense of tonality. On top of this musical intelligence, the user affects the dynamics (soft/loud), the pitch range, whether chords or single notes are played, and various kinds of articulation (arpeggios, scales, chords). The result is an environment that is "musical" in a relatively traditional manner, reminiscent of the classical and jazz idioms, but where the user can also feel that s/he is in some sense "playing" the music.
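The kind of musical intelligence described above can be illustrated with a small sketch: the user supplies only an approximate pitch or a direction, and the system snaps the result onto the underlying scale so that every note fits the tonality. The function names and the choice of C major are ours, for illustration, and do not reflect the MC's actual algorithms.

```python
# Hedged sketch of scale-constrained note selection. Names and the
# choice of scale are illustrative, not the MC's implementation.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI notes, one octave

def snap_to_scale(approx_midi, scale=C_MAJOR):
    """Return the scale note closest to the user's approximate pitch."""
    return min(scale, key=lambda note: abs(note - approx_midi))

def next_note(current, direction, scale=C_MAJOR):
    """Step up or down the scale, clamping at its edges."""
    i = scale.index(snap_to_scale(current, scale))
    i = max(0, min(len(scale) - 1, i + (1 if direction == "up" else -1)))
    return scale[i]

print(snap_to_scale(63))    # → 62: an out-of-scale D# is pulled into the scale
print(next_note(64, "up"))  # → 65: an upward gesture from E yields F
```

However the real system chooses its notes, the principle is the same: the user controls the gesture, while the scale membership of the result is guaranteed by the software.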
Particles
The Particles environment is perhaps the most sonically complex of the six. It is based on four sound worlds, each consisting of a large number of short samples, in which the user can orient him/herself: movements in different zones trigger the sounds. Within each sound world, the samples are organized so that sounds sharing a similar characteristic or belonging to the same source category are contiguous. Moreover, the transitions between different groups or categories of sounds are made continuous, so that even where there is a pronounced change in quality, the change still comes about as a smooth and sonically continuous transition. The nature of the different sound worlds from which the user can choose suggests different metaphors. In one, Materials, the user can play the sounds of materials like glass, metal, water, wood and skin by navigating to different areas of the interaction space. In another sound world, Songshan Mountain, the user plays vocal sounds from a Chinese opera singer. The environment reacts in a very dynamic manner, letting the size of movement control the density of the samples, which can vary from single particles to chained sequences or even dense clouds. These dense masses of sound have a relatively abstract quality that can be far removed from the original. All in all, the large number of sounds gives the environment a sonic richness that can evoke interest and curiosity.
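The movement-size-to-density mapping described above might be sketched like this, with the thresholds and trigger rates chosen purely for illustration:

```python
# Toy sketch of movement size controlling sample density, from single
# particles to dense clouds. Thresholds and rates are illustrative.

def particle_density(movement_size):
    """Map normalised movement size (0..1) to sample triggers per second."""
    if movement_size < 0.1:
        return 0.0     # stillness: silence
    if movement_size < 0.4:
        return 1.0     # small gestures: single particles
    if movement_size < 0.8:
        return 8.0     # medium movement: chained sequences
    return 40.0        # large movement: a dense cloud of grains

for size in (0.05, 0.3, 0.6, 0.95):
    print(size, "->", particle_density(size), "triggers/s")
```

The real environment presumably varies density continuously rather than in steps, but the sketch captures the idea: the larger the movement, the thicker the texture.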
Techno
This environment is based on a contemporary popular dance metaphor, where the user is given an underlying beat to which s/he can dance. The system reacts to the user’s movement by making the music more active and engaging, so as to invite the user to keep dancing. This takes place in a few ways.
One of the most basic aspects of the techno genre is the groove. While it must never stop, it must at the same time be modulated, and these modulations can be user-activated. When the user stands motionless before the MotionComposer, a beat is heard, but when they "groove to the music" the kick (bass drum) comes in. This effect generally keeps the user in motion, bobbing up and down or shifting weight between legs. By bending low, the music becomes low-pass filtered, a recognizable effect from the techno genre. Stretching high similarly offers a high-pass effect. Finally, melodic layers can be added and removed by extending the arms.
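Put together, the Techno mappings above amount to a small table of rules. The parameter names and thresholds below are invented for the sketch; only the mappings themselves come from the description above.

```python
# Hedged sketch of the Techno mappings: grooving gates the kick, bending
# low applies a low-pass filter, stretching high a high-pass, and arm
# extension adds melodic layers. Names and thresholds are illustrative.

def techno_controls(activity, body_height, arm_extension):
    """Map tracked features (all normalised 0..1) to musical controls."""
    return {
        "kick_on": activity > 0.2,                  # groove brings in the kick
        "lowpass": body_height < 0.35,              # crouching muffles the mix
        "highpass": body_height > 0.85,             # stretching thins it out
        "melody_layers": round(arm_extension * 3),  # 0-3 added layers
    }

print(techno_controls(activity=0.6, body_height=0.5, arm_extension=0.7))
# → {'kick_on': True, 'lowpass': False, 'highpass': False, 'melody_layers': 2}
```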
Motion Composer 3.0
Fields
This environment is relatively diverse, since it includes metaphors of narrativity/impersonation as well as of playing a musical instrument and causing sonic events. The logic of the environment rests on a division of the interaction space into two side-by-side zones, whose contents can be preset by the user or therapist. In some of the fields the user plays animal sounds, enabling a game of impersonation or role-playing in which the user "becomes" the animal. In others, the user can "play" different instruments or objects like drums and glass, or weather phenomena like "wind" and "rain". Musically, this environment therefore has an affinity with soundscape composition and an expanded notion of what sounds can be musical. The division of the interaction space into two distinct areas, which can be played simultaneously by two users, makes this environment ideal for duets, enabling, for instance, a "conversation" between a chicken and a frog. In fact, developing Fields made us aware of a number of benefits of having more than one user, something we will discuss further below.
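The two-zone logic is simple enough to sketch: the interaction space is split down the middle, each half preset with its own sound world, so two users can play at once. The function and the example presets are ours, for illustration.

```python
# Illustrative sketch of Fields' two side-by-side zones. The zone split
# and the preset sounds are examples, not the actual configuration.

def zone_for(x_position, split=0.5):
    """Left half of the space is zone 0, right half is zone 1."""
    return 0 if x_position < split else 1

ZONE_SOUNDS = ["frog", "chicken"]  # e.g. presets for an animal duet

for x in (0.2, 0.8):
    print(f"user at x={x} plays the {ZONE_SOUNDS[zone_for(x)]}")
```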
Importer
Everyone has their favorite music. This environment lets the user import, via USB stick, their favorite song or sound (such as the voice of a parent or close friend) into the MC. Movement then plays back the recording, according to how much the person moves.
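The behaviour described above can be sketched as a playhead that only advances while the person is moving. The function name and rate constants are illustrative assumptions.

```python
# Minimal sketch of movement-driven playback: the imported recording
# advances in proportion to activity. Names and rates are illustrative.

def advance_playhead(playhead_s, activity, dt_s=0.1, max_rate=1.0):
    """Move the playhead forward in proportion to movement activity (0..1)."""
    return playhead_s + max_rate * max(0.0, min(1.0, activity)) * dt_s

pos = 0.0
for activity in (0.0, 0.5, 1.0, 1.0):  # the user gradually starts to move
    pos = advance_playhead(pos, activity)
print(f"{pos:.2f}")  # → 0.25 seconds of the song played back
```

Stillness pauses the song; sustained movement plays it at full speed, which is what makes the causal link from body to recording so direct.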
Single vs. Multi-User
Even when we do them by ourselves, music and dance are in some sense concerned with performance; sharing the experience heightens the enjoyment. In the current version of the MC, only one of the six music environments, Fields, is implemented for multiple users. Based on positive experiences with having two users together in an environment in many of the later workshops, we have seen the need for porting the two-person mode to all environments, and this is currently in development.
Allowing two-person interaction also has the advantage of creative social and musical interaction, whether involving a friend, colleague or therapist. As Eide (2014) points out, the dialogical perspective in music has become important to music therapists in recent decades, emphasizing co-experience and co-creation (p. 122). In our work, we have experienced that games of imitation, mirroring and dialogue heighten the enjoyment for many users. The challenges that two-person interaction presents to users -- most often related to hearing who does what -- are often easily solved through focus and conscious guidance, and can offer the pedagogical benefit of making space in the interaction and listening to the other.
Gathering Experiences
Beginning in 2010, the MC Team has been seeking support for the claim that interactive digital movement-to-music technologies can play a role in affording dance and music engagement among highly diverse users, including those with severe disabilities. The work has taken place in the form of 28 workshops in 7 countries. A total of 242 persons with disabilities took part, as well as 119 therapists, teachers and caretakers.
The types and severity of disability varied widely, as did age and demographics. Disabilities included Rett Syndrome, Autism Spectrum Disorder, Cerebral Palsy, Quadriplegia, Parkinson's, Alzheimer's and others. Most workshops also included 'non-disabled' participants (including, in some cases, professional dancers and musicians).
The workshops were organized through institutions for persons with disabilities, and participation was free. Sessions alternated individual and collective exercises, beginning with a group warm-up lasting ca. 30 minutes. This was followed by a demonstration of the interactive system, which gave participants the chance to have an individual taste of the experience. Next, we divided into smaller groups of 3-6 persons, where each participant had more time to experiment. In this part, the needs of individual participants guided the workshop, which included storytelling scenarios and little performances. At the end we would bring everyone together for a finale, followed by a debriefing or discussion focused on the experiences of the participants.