As an amateur artist, I have always believed that aspects of my musical thinking are in fact visual. One of my current research topics is how sound and image can be connected by letting each control the other's processing parameters.


Using low-level coding in the Gen environment of Max, I explore methods of transforming images in real time. I have devised several software modules that transform the colors, patterns, and textures of a preexisting image, as well as its overall shape, producing a range of personally designed visual effects such as granulation, diffusion, erosion and deposition, dissolving and drying, sparkling, unraveling, and many others. I find it particularly effective to transform an image based on the characteristics of another image. Alongside processing concrete images, I also explore synthesizing images from scratch. These visual processors and synthesizers are developed in both two and three dimensions.
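
As a minimal sketch of this cross-image idea outside Max (the actual modules are Gen patches, and the function name, ranges, and the particular mapping here are illustrative only), the following NumPy code displaces each pixel of a source image by an offset derived from the luminance of a second, controlling image:

    import numpy as np

    def displace_by_luminance(src, ctl, amount=0.05):
        """Shift each pixel of src by an offset derived from the
        luminance of the control image ctl. Both are HxWx3 float
        arrays in [0, 1]; the mapping is one illustrative choice."""
        h, w, _ = src.shape
        # Per-pixel luminance of the controlling image (Rec. 601 weights).
        lum = ctl @ np.array([0.299, 0.587, 0.114])
        # Turn luminance into a signed pixel offset, centered on zero.
        shift = ((lum - 0.5) * amount * min(h, w)).astype(int)
        # Re-sample the source at the shifted coordinates, wrapping at edges.
        rows = (np.arange(h)[:, None] + shift) % h
        cols = (np.arange(w)[None, :] + shift) % w
        return src[rows, cols]

Run once per frame, a mapping like this smears the source along contours of the controlling image, which is one simple route to the erosion- and diffusion-like behaviors mentioned above.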


The next step in this project is to study techniques for integrating sound and image processing and synthesis. The amplitude, pitch, or frequency content of a sound can control various parameters of image and video transformation, such as playback parameters, scissoring and gluing, position and motion, color and brightness, and mixing, as well as other more complex processing techniques. Conversely, quantities derived from images and videos can drive live transformations of a concrete or instrumental sound, or control sound synthesis parameters in real time, connecting the two media at a deep level. Sound and video can also be linked by controlling the synthesis and processing of both simultaneously through physical motion. All of these ideas are to be explored in my compositions for instruments and live electronics with visuals.
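
As a schematic sketch of such mappings (in practice they live inside the Max patch; the specific features and ranges below are only assumed examples, not the actual mappings), a feature extracted on one side can be scaled into a parameter range on the other:

    import numpy as np

    def rms(block):
        # Root-mean-square amplitude of one audio block.
        return float(np.sqrt(np.mean(block ** 2)))

    def map_range(x, in_lo, in_hi, out_lo, out_hi):
        # Linearly map x between ranges, clamped to the output range.
        t = np.clip((x - in_lo) / (in_hi - in_lo), 0.0, 1.0)
        return out_lo + t * (out_hi - out_lo)

    # Sound -> image: block amplitude scales frame brightness.
    def brightness_from_audio(frame, block):
        gain = map_range(rms(block), 0.0, 0.3, 0.5, 2.0)
        return np.clip(frame * gain, 0.0, 1.0)

    # Image -> sound: mean frame brightness sets an oscillator's pitch.
    def freq_from_frame(frame, lo_hz=110.0, hi_hz=880.0):
        return map_range(float(frame.mean()), 0.0, 1.0, lo_hz, hi_hz)

The same pattern extends to any of the pairings listed above: extract a feature on one side, map it into a sensible range, and feed it to a parameter on the other, once per audio block or video frame.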

