Microtonal sampler and synthesizer based on Iranian maqams


Following a study of Iranian traditional music, I devised a sampler based on Iranian modes in MaxMSP. My aim was to project the specific microtones of this music onto my recordings of Iranian soundscapes and sound objects. This microtonal sampler offers 16 pitches per octave, derived from the 16 frets of Iranian plucked string instruments, and is tuned to the following just-intonation pitches:

[Table: the sampler's 16 just-intonation frequency ratios]

As is apparent in the table of frequency ratios, one ratio stands out: 12/11, which in combination with the other ratios can, to a great extent, represent Iranian microtonal intervals. I have used this ‘microtonal sampler’ and its counterpart ‘microtonal synthesizer’ in my three short compositions, desert, oasis, and caravan, as well as in my dissertation composition, Unremembered Soundscapes.
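To make the tuning concrete, the sketch below (Python, not part of the Max patch) converts just-intonation ratios to cent values and scale frequencies. Apart from 12/11, which is singled out above, the ratio list is an illustrative assumption, not the sampler's actual 16-ratio set:

```python
import math

def ratio_to_cents(num, den):
    """Convert a just-intonation frequency ratio to cents (1200 * log2 of the ratio)."""
    return 1200.0 * math.log2(num / den)

def scale_frequencies(base_hz, ratios):
    """Map a base pitch through a list of (numerator, denominator) ratios."""
    return [base_hz * n / d for n, d in ratios]

# Illustrative subset only -- apart from 12/11, these ratios are assumptions,
# not the sampler's actual 16-ratio table.
ratios = [(1, 1), (12, 11), (9, 8), (6, 5), (5, 4), (4, 3), (3, 2), (2, 1)]

for n, d in ratios:
    print(f"{n}/{d}: {ratio_to_cents(n, d):7.2f} cents")
# 12/11 lands near 150.6 cents -- a neutral second, between the tempered
# semitone (100 cents) and whole tone (200 cents).
```

The 12/11 ratio's position roughly midway between the tempered semitone and whole tone is what lets it stand in for the neutral intervals of Iranian music.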


The audio processing Max patches of Unremembered Soundscapes that incorporate this microtonal sampler can be found here:



The next step in the development of the microtonal synthesizer involves a study of frequency modulation synthesis where just-intonation microtonal frequency ratios serve as the harmonicity ratio of the synthesizer.
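As a sketch of that direction, the Python fragment below renders a short buffer of simple two-operator FM in which the modulator frequency is the carrier times a just-intonation harmonicity ratio (12/11 here). The carrier frequency, modulation index, and render settings are illustrative assumptions, not parameters of the actual synthesizer:

```python
import math

def fm_sample(t, carrier_hz, harmonicity, index):
    """One sample of simple two-operator FM: the modulator runs at carrier * harmonicity."""
    mod_hz = carrier_hz * harmonicity
    return math.sin(2 * math.pi * carrier_hz * t
                    + index * math.sin(2 * math.pi * mod_hz * t))

def render(seconds=0.01, sr=44100, carrier_hz=220.0, harmonicity=12 / 11, index=2.0):
    """Render a short mono buffer of FM samples in [-1, 1]."""
    return [fm_sample(i / sr, carrier_hz, harmonicity, index)
            for i in range(int(round(seconds * sr)))]

buf = render()   # 441 samples at the defaults
```

Because the harmonicity ratio fixes the spacing of the FM sidebands relative to the carrier, a just-intonation ratio such as 12/11 colours the resulting spectrum with the same microtonal interval used in the sampler.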



Explorations in using motion capture devices in live electronic music


In my recent compositions, I have been exploring the potential of motion capture devices such as the Kinect in the performance practice of live electronic music. Specifically, I was curious to see how a computer performer’s physical gestures and motions could be interpreted musically and expressively.


In Unremembered Soundscapes, I explored the idea of triggering samples and controlling various processing parameters with physical motions inspired by the actions of Iranian traditional professions. For example, in the first movement, ‘a tapestry of glass’, the scissors motion of carpet weaving, with each hand acting as one of the two blades, controls a filtering of the sonic material; in the third movement, ‘a fluff of smile’, shaping an imaginary felt hat with the index fingers controls glissandi effects applied to the live instrumental sounds.


I also investigated the idea of triggering sound grains with continuous, directional gestures. For example, in the fourth movement, ‘a breath of tea’, the motion of pouring tea into imaginary teacups triggers a flow of ‘pouring tea’ sound grains; in the last movement, ‘dust of aged hands’, pounding on an imaginary anvil in slow motion triggers deep, reverberant samples of coppersmithing.
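The actual mappings live in the Max patches, but the scissors-to-filter idea above can be sketched in Python: the distance between the two tracked hand positions is normalized and mapped exponentially to a filter cutoff. All numeric ranges here are hypothetical placeholders, not values from the piece:

```python
import math

def hand_distance(left, right):
    """Euclidean distance between two 3-D hand positions (e.g. Kinect skeleton joints)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(left, right)))

def distance_to_cutoff(dist_m, min_hz=200.0, max_hz=8000.0, max_dist_m=1.2):
    """Map hand separation (metres) to a filter cutoff on a logarithmic scale,
    which tracks pitch perception better than a linear mapping."""
    x = max(0.0, min(1.0, dist_m / max_dist_m))   # clamp and normalize to [0, 1]
    return min_hz * (max_hz / min_hz) ** x        # exponential interpolation

print(distance_to_cutoff(0.0))   # hands together ("closed scissors") -> 200.0 Hz
print(distance_to_cutoff(1.2))   # hands fully apart -> 8000.0 Hz
```

The exponential interpolation is a common choice for frequency parameters, since equal increments of hand motion then produce perceptually equal pitch steps in the filter.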


This is an area that I continue to explore, as I find triggering, shaping, and spatializing sonic material with physical motions very expressive and natural. In my previous projects, I primarily studied the use of motion capture devices by computer performers. I am currently pursuing this idea by examining how acoustic instrumentalists can control electronic extensions of their instruments using physical motions natural to their practice. This includes examining smaller-scale gestures, such as finger motions, for controlling the electronics through controllers such as the Leap Motion. I am also curious to see whether sensors such as accelerometers and pressure, bend, and tension sensors can be used to transform the human body into a musical instrument. This is studied in the following project.



Gestures of music, sound of motions: art and science of listening to movements


This project will focus on developing innovative methods of motion sonification applied to the choreographed movements of contemporary dance, and will address two intertwined questions, one artistic and one scientific. Can intricate motions of the human body generate, shape, and transform sound in expressive, ergonomic, and musical ways? And can sound be used as a guide for refining complex physical activities, helping performers approach an optimal motion model?


For the artistic component, I would like to compose a cycle of compositions for dance and live electronics, ‘Ringing Elbows, Roaring Knees’. In performance, the dancer’s choreographed motions are picked up by a series of sensors attached to the body and are used to generate and manipulate sound by controlling diverse sound processing and synthesis parameters in audio software designed and programmed by the composer. These sounds are then projected through a set of loudspeakers surrounding the audience. The scientific part involves a study of the refinement of motor skills through sound feedback. Motion patterns performed by a dancer are sonified and compared with pre-recorded motion models. The dancer listens to the sound of their motions in real time and refines them in accordance with the sonic response.
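A minimal sketch of the feedback loop, assuming a hypothetical mapping: the Euclidean deviation between a live motion sample and the corresponding model sample becomes a pitch offset, so larger errors sound higher. The base pitch and scaling are illustrative choices, not part of the proposal:

```python
def sonify_deviation(live, model, base_hz=440.0, cents_per_unit=100.0):
    """Map the deviation between a live motion sample and its model sample to pitch.

    `live` and `model` are same-length coordinate vectors (e.g. joint positions).
    Zero deviation returns the base pitch; each unit of deviation raises it
    by `cents_per_unit` cents, so the dancer hears how far off they are.
    """
    deviation = sum((a - b) ** 2 for a, b in zip(live, model)) ** 0.5
    return base_hz * 2.0 ** (deviation * cents_per_unit / 1200.0)

# A perfectly matched motion sounds at the base pitch...
print(sonify_deviation([0.1, 0.2, 0.3], [0.1, 0.2, 0.3]))   # 440.0
# ...while each unit of deviation raises it by one semitone (100 cents).
print(sonify_deviation([1.0, 0.0, 0.0], [0.0, 0.0, 0.0]))
```

In practice the mapping could drive timbre or loudness instead of pitch; the point is only that the sonic response varies continuously and audibly with the distance from the model.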


This project was proposed in collaboration with Prof. Jeffrey Boyd from the Computational Media Design Program, University of Calgary.



The search for new modes of expression: electronic enhancements and the performance breach between acoustic instrumental performers and composers


Together with Dr. Jeremy Brown, Dr. David Eagle, and Dr. Laurie Radford, I proposed a project that involves developing a curriculum for training wind players to understand, design, and use live electronic tools as extensions of their acoustic instruments. So far, interactive electronic techniques have been used almost exclusively by electronic music composers. One purpose of this project is to introduce and foster a performer-centric view of this medium by familiarizing student performers with this new dimension of their instruments and sparking an interest in exploring its potential.


A major part of this project is devoted to studying the effectiveness of various pedagogical approaches to teaching the electroacoustic medium to musicians, and particularly wind players. The curriculum is divided into several phases, each focusing on one dimension of this knowledge: theoretical, practical, and creative. The impacts and outcomes of these phases will be assessed through interviews, performances, exams, and surveys conducted before, during, and after each phase.


Another major aspect of this project focuses on investigating the effectiveness and expressivity of different methods of controlling electronic effects for wind instruments. These methods include the use of wind and lip sensors, physical motions and gestures tracked by motion capture devices, continuous pedal controllers and triggers, and even brainwave sensors. In the search for idiomatic, practical, and ergonomic solutions, these tools and techniques will be developed, tried out, and assessed.


By training a cohort of undergraduate students in a core set of live electronic competencies within a program of performance and collaboration, we can, for the first time, bring about a change in the perception of performers, who tend to believe that the complexity of electronic media is insurmountable, and create a cadre of advanced performers with a high level of creative ability on electronic extensions.



© 2016-2019 Ali Nader Esfahani