The conventional way of producing sound with musical instruments implies some degree of interaction between the human agent and the instrument, most evident in the struggle to control forces and reactions, a struggle informed by an embodied knowledge of the instrument's particular resonances and idiosyncrasies. The duality of gesture-as-movement and gesture-as-intention is therefore clearly present in the process that transforms a physical performance gesture into the conveyed musical or sonic gesture.
Digital instruments and systems are often used to further expand the reach of these gestures, taking them beyond the physical capabilities of the performer. Traditional approaches favoured pre-conceived prosthetic gestures that were triggered during live performance, whereas the current paradigm is mostly based on the digital expansion of the regular capabilities of acoustic instruments. Whilst this often implies a decoupling of the aforementioned duality, favouring the musical or sonic gesture over the physical performance, performers can still embody this expansion in order to exert and display a given degree of control over the musical outcome.
But what happens when the interactive capabilities of a digital system increase to the point where the performer can no longer reasonably expect to fully control the outcome of his or her actions? Is the expressivity of the system compromised, or are we facing a new kind of expressive potential? Who are the agents behind that expression? How can interactive musical systems expand our current notions of musical expressivity and musical agency? What does this mean for the composer, the sound designer, the performer, or the audience? These are some of the questions that will be addressed in this talk, based on the implications that different concepts of interactivity have for our understanding of interactive music.