With the focus on music and technology this week in the run-up to the AlgoRhythms popup summit, we are thrilled to talk with Professor John Gibson, director of the Jacobs School of Music’s Center for Electronic and Computer Music (CECM), about its origins and how the world of AI will impact its creative operations. The CECM, led by Gibson and Assistant Professor Chi Wang, has been at the forefront of computer music since its founding in the early 1980s and continues to flourish as one of the most innovative centers of its kind.
Primary Photo: Marielle Hug
How does the CECM fit into the Jacobs School of Music, and are there examples of major successes you can share?
As Director of the Center for Electronic and Computer Music (CECM) in the composition department at Jacobs, I organize our curriculum and concert series and work with our Associate Director, Prof. Chi Wang, to set the tone for our community of students making music with computers.
Our students are active at festivals and conferences, as are we. Just this past month, two composition students, Yao Hsiao and Huan Sun, were among the four finalists for the national commission competition of the Society for Electro-Acoustic Music in the United States (SEAMUS). Their pieces were included at SEAMUS-sponsored events in New York City and at Ohio University. Other students appearing at one or more peer-reviewed festivals this month are Xinyuan Deng, Eunji Lee, Anne Liao, Chloe Liu (recent alum), Alexey Logunov, and Dmitri Volkov. Anne Liao, who is completing the CECM Master’s degree in Computer Music Composition, is just back from presenting her work in Paris at IRCAM, the world-renowned center for computer music composition and research.
Prof. Wang and I alternate teaching our Projects course, in which students create music for our end-of-semester concert in Auer Hall (April 21). My specialties are composition for acoustic instruments and electronics, fixed-media multichannel audio, and software development. Prof. Wang’s main focus is composition and performance for data-driven instruments, for which she builds gestural controllers and designs software. Our work has been recognized internationally.
Computer music has been around for decades and yet, in the past few years, it seems that the connection between technology and music is having an outsized influence on the way we think about, explore, and create music. Can you take us through the hallmarks of what has made innovation in computer music so important through the decades?
What we do in computer music now relies on decades of work before the advent of computers. In 1944, Halim El-Dabh went into the streets of Cairo with a wire recorder and captured the sounds he heard. This resulted in the earliest example of what was later called musique concrète, or music shaped from recorded sounds of the world around us. The person who coined that term was Pierre Schaeffer, who worked at French national radio and in 1948 produced the influential piece Étude aux chemins de fer, or Railway Study. He went out with a disc recorder and recorded the movements of six locomotives in a Paris train station. He returned to the studio and constructed a musical dialogue between these sounds, creating tape loops, changing speed, cutting and splicing tape — all techniques that we can accomplish more easily in a computer and that show up today both in popular music and in electronic music stemming from the European classical tradition.
In Murray Hill, New Jersey, researchers working at Bell Telephone Laboratories invented the transistor, the UNIX operating system, the C programming language, the CCD image sensor used in digital cameras, many communications protocols — and most importantly, computer music languages! When they weren’t working on the boring stuff, researchers such as Max Mathews spent their time figuring out how to generate sound using a computer and convert it from digital to analog form, so that they could hear it over a loudspeaker. If you are an EDM producer using a synthesizer program, you are relying on Mathews’s invention of wavetable synthesis in 1958. All of the applications we use to make sound today share a common ancestor in the research done at Bell Labs.
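For readers who like to see the idea in code, here is a minimal sketch of wavetable synthesis in Python: a single cycle of a waveform is stored in a table and read back at different speeds to produce different pitches. This is only an illustration of the principle, not the historical Bell Labs software; the sample rate, table size, and waveform are arbitrary choices.

```python
import numpy as np

SR = 44100            # sample rate in Hz (arbitrary choice for this sketch)
TABLE_SIZE = 2048     # number of samples in one stored waveform cycle

# Build the wavetable: one cycle of a simple waveform (here, a sine).
table = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)

def wavetable_osc(freq, dur):
    """Read the table at a rate proportional to the desired frequency."""
    n = int(SR * dur)
    step = freq * TABLE_SIZE / SR          # table samples to advance per output sample
    phase = (np.arange(n) * step) % TABLE_SIZE
    return table[phase.astype(int)]        # nearest-neighbor lookup (real synths interpolate)

# A one-second tone at 440 Hz.
tone = wavetable_osc(440.0, 1.0)
```

The same table could hold any single-cycle shape, which is one reason the technique became so common in later synthesizers.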
Musique concrète and the Bell Labs innovations were among the efforts that sparked a revolution in the way we think of the musical potential of sounds of any sort, not just instrumental or vocal sounds. This opened music to concerns beyond the pitch-time basis of much Western music to encompass specific timbral (or tone-color) resources, such as the continuum between clear pitch and noise, and the simulation of three-dimensional space in music. I would argue that composers who learn to hear these aspects of electronic music often begin to write acoustic music in a different way. For example, the celebrated Finnish composer Kaija Saariaho created unique instrumental textures influenced by her earlier work in computer music at IRCAM, the French research center. Something similar has happened to hip-hop breakbeats produced using samplers and drum machines in the 1980s. Now there are drummers, such as the incomparable Jojo Mayer, who play breakbeats on real drums that blend the electronic qualities of the original productions with, in the case of Mayer, a refined jazz instrumental technique. But the sound ideas come from music developed for an electronic medium. To my mind the most profound effect of electronic and computer music on music-makers is the change in hearing and musical imagination it induces.
Although much of the earliest, and plenty of the current, electronic music repertoire is in the form of fixed media, for playback over loudspeakers in a concert hall, musicians since at least the 1960s have wanted to perform and even improvise electronic music before an audience. This has led to the development of bespoke instruments that combine gesture-sensing hardware with software that makes sound when you move your hands and fingers. An example from the popular music world: the glove sensors designed for singer/songwriter/producer Imogen Heap by the MIT Media Lab. At the CECM, Prof. Chi Wang invented a gestural controller she calls Yuan, a tambourine-shaped device that can sense its orientation and the touch of the performer. She fabricated several Yuan for use in her piece, Dynamic Equilibrium, a commission from the IU NOTUS contemporary vocal ensemble. For her work with Yuan, Prof. Wang was selected as a finalist for the international Guthman Musical Instrument Competition at Georgia Tech, where she performed her own music and improvised with a local percussionist.
I imagine the future of music technology will include increasingly sophisticated and natural ways to control sound in performance, as well as continued exploration of sound creation methods.
PHOTO: Anne Liao performs with a custom light controller.
With AI and other algorithmic tech on the rise, what excites you about changes on the horizon?
There are two applications of machine learning we are watching and experimenting with in the CECM: enhancing gestural control and exploring new sound-generation possibilities. This is going to get a little technical, so please bear with me!
Two ideas expand our set of techniques for guiding sound generation with performance gestures. One is called dimension reduction. Say you have a two-dimensional control surface, such as a tablet, and you want to control dozens of synthesizer parameters (or dimensions) by drawing on the tablet. Using machine learning, we can associate several points on the tablet with specific sounds (combinations of parameters), and the system can then construct meaningful parameter combinations for any other point on the tablet. This makes it possible to create an interface to the synthesizer that a musician can explore musically, discovering its fascinating nooks and crannies by drawing on the tablet. The musician must learn to play this new instrument, just as they would any conventional instrument. Machine learning can also perform gesture recognition, so that when you make a particular sweeping motion with your hand, the computer plays a specific sound.
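To make the mapping idea concrete, here is a minimal Python sketch, assuming a tablet that reports (x, y) positions between 0 and 1 and a synthesizer with four continuous parameters. It uses an off-the-shelf nearest-neighbor regressor to interpolate between a handful of hand-picked examples; the CECM’s own tools are more sophisticated, so treat the names and numbers here as placeholders.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical training data: a few (x, y) tablet points, each paired
# with the full set of synthesizer parameters the composer liked there.
tablet_points = np.array([[0.1, 0.2], [0.8, 0.3], [0.5, 0.9], [0.2, 0.7]])
synth_params  = np.array([
    [0.90, 0.10, 0.30, 0.05],   # e.g. [cutoff, resonance, mod index, noise mix]
    [0.20, 0.80, 0.60, 0.40],
    [0.55, 0.35, 0.90, 0.10],
    [0.30, 0.60, 0.20, 0.75],
])

# Distance-weighted interpolation between the stored examples.
mapper = KNeighborsRegressor(n_neighbors=3, weights="distance")
mapper.fit(tablet_points, synth_params)

# Any new stylus position now yields a plausible in-between parameter set.
new_point = np.array([[0.45, 0.5]])
print(mapper.predict(new_point))
```

Every stylus position yields some blend of the stored sounds, and the musician learns the resulting instrument by exploring it.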
There are several methods that allow us to feed a computer many audio recordings and use them in novel ways. One such technique is called audio mosaicing. Say you have a recording of a conversation between two people. We can break this stream of audio into small segments of sound and analyze each segment for its pitch, loudness, brightness, noisiness, and other properties. Then we take a second collection of recordings — musical instruments, singing voices, animal sounds, anything! — and subject it to the same analysis. Finally, we look for the closest match of each segment in the conversation to one in the second collection of sounds. We simply substitute the matches for the original conversation segments. So, we could hear what the conversation sounds like if the speakers were instruments or animals instead. CECM alum Felipe Tovar-Henao has written software that performs this miracle in a way that is accessible to computer musicians.
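A toy version of the matching step might look like the following Python sketch. It is not Tovar-Henao’s software, just an illustration of the idea: the file names are placeholders, and the three features (loudness, brightness, noisiness) stand in for the richer analysis a real mosaicing tool would use.

```python
import numpy as np
import librosa

HOP = 2048  # segment length in samples (~46 ms at 44.1 kHz)

def describe_segments(path):
    """Slice a file into short segments and describe each one
    with a few simple audio features."""
    y, sr = librosa.load(path, sr=44100, mono=True)
    feats, segs = [], []
    for start in range(0, len(y) - HOP, HOP):
        seg = y[start:start + HOP]
        rms = librosa.feature.rms(y=seg).mean()                            # loudness
        centroid = librosa.feature.spectral_centroid(y=seg, sr=sr).mean()  # brightness
        flatness = librosa.feature.spectral_flatness(y=seg).mean()         # noisiness
        feats.append([rms, centroid / sr, flatness])  # crude scaling so features are comparable
        segs.append(seg)
    return np.array(feats), segs

# Analyze the "target" (the conversation) and the "corpus" (any other sounds).
target_feats, target_segs = describe_segments("conversation.wav")  # placeholder files
corpus_feats, corpus_segs = describe_segments("instruments.wav")

# For each target segment, find the closest-sounding corpus segment,
# substitute it, and glue the substitutes back together.
out = []
for f in target_feats:
    idx = np.argmin(np.linalg.norm(corpus_feats - f, axis=1))
    out.append(corpus_segs[idx])
mosaic = np.concatenate(out)
```

Playing back `mosaic` gives the rhythm and contour of the conversation rendered entirely in sounds drawn from the second collection.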
An AI-driven cousin to mosaicing is a timbre synthesis platform called RAVE, the Realtime Audio Variational autoEncoder, created by Antoine Caillon. The system requires you to train it on several hours of audio, but then you can play it audio it has never heard, and it will produce unusual combinations of the two sources of audio. It can sound a little like a deranged auctioneer, but it’s a lot of fun! The German composer Alexander Schubert has used this technology in his groundbreaking theatrical pieces Convergence and Anima. CECM composer Isaac Smith has used RAVE in his composition Extension for solo flute and electronics, training it on content provided by Robin Meiksins, the IU alum for whom he wrote the piece. Recent CECM graduate Alexander Toth has also been working with timbre synthesis using AI for his compositions.
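If you want to try RAVE yourself, trained models can be exported as TorchScript files and driven from Python (or from Max/MSP via the nn~ external). Assuming you have such an exported model and that it exposes encode and decode methods, as recent RAVE exports do, a minimal timbre-transfer sketch might look like this; the model and audio file names are placeholders, and the sample rate must match whatever the model was trained on.

```python
import torch
import librosa
import soundfile as sf

# Load an exported, pretrained RAVE model (a TorchScript file).
# "violin.ts" is a placeholder for whatever model you trained or downloaded.
model = torch.jit.load("violin.ts").eval()

# Load the audio you want to reinterpret through the model's learned timbre.
# The sample rate here must match the model's training rate.
audio, sr = librosa.load("speech.wav", sr=44100, mono=True)
x = torch.from_numpy(audio).float().reshape(1, 1, -1)  # (batch, channel, samples)

with torch.no_grad():
    # Encode into the learned latent space, then decode back to audio:
    # the output keeps the gestures of the input but the model's trained timbre.
    z = model.encode(x)
    y = model.decode(z)

sf.write("speech_as_violin.wav", y.reshape(-1).numpy(), sr)
```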
We are also looking at machine learning classification techniques for timbre recognition. These would let us program the computer to recognize when a violinist is playing pizzicato, bowing close to the bridge, or bowing normally, so that we can process the violin sound differently depending on the playing technique.
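As a rough sketch of what such a classifier could look like, here is a Python example using averaged MFCCs (a standard timbre descriptor) and a support-vector classifier. The training files, labels, and live input are hypothetical placeholders; a usable system would need many labeled examples per playing technique.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

LABELS = ["pizzicato", "sul_ponticello", "ordinario"]  # playing techniques to detect

def timbre_features(path):
    """Summarize a short recording with averaged MFCCs."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labeled training clips of a violinist demonstrating each technique.
train_files = [("pizz_01.wav", 0), ("pont_01.wav", 1), ("ord_01.wav", 2)]  # ...and many more
X = np.array([timbre_features(f) for f, _ in train_files])
y = np.array([label for _, label in train_files])

clf = SVC(kernel="rbf").fit(X, y)

# In performance, classify a short buffer of live input and branch the processing.
guess = clf.predict([timbre_features("live_buffer.wav")])[0]
print("Detected technique:", LABELS[guess])
```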
All these techniques help us discover new creative approaches to composing and performing our music. We are not particularly interested in turning over our creative visions to AI or any other technique, but we are open to new sonic ideas AI makes available.
PHOTO: Chloe Liu and Katelyn Connor in performance. Photo: Noam Niv
Do you have any fears about AI taking over?
Well, I worry about power shifting even more to tech companies and their leaders. I worry about AI-facilitated disinformation and deepfakes contributing to the crumbling of our civic culture. I worry about this making it harder for people to learn how to identify legitimate sources of information. And I worry about innocent people becoming the targets of scams and bullying made possible by AI tools. All of this is happening already.
I might be naive about the dangers AI poses to musicians. There will certainly be increasingly negative impacts on people who create music to sell as background for YouTubers, for example, and possibly on those making music for other forms of commercial visual media, since AI could let a company generate lots of plausible content easily (though not without opening a legal can of worms around the data used to train the AI). This parallels the job losses already felt by voice-over artists due to realistic AI-driven speech synthesis.
But I have hope, because artistic expression truly is the domain where we are most human and least replaceable by machine simulation. That applies to musical composition, performance, and improvisation. People want to engage in these activities; they don’t want to relegate them to machines. People go to concerts because they want to witness other humans creating music before their ears, not robots performing for them.
As powerful as our current generative AI tools are, they are extensive pattern-matching machines built on an enormous pile of data harvested from an array of human sources (often without permission). Unless you believe that we humans are also merely extensive pattern-matching machines, you can have faith that musical expression will remain firmly in the hands and voices of humans.
PHOTO: Yao Hsiao in performance.
Creating computer music seems particularly interdisciplinary because it involves informatics, audio manipulation, and often choreography. Are there signs today of where the music tech space is heading?
Computer music certainly can be a highly interdisciplinary practice, reliant as it is both on advanced technology and innovative artistry. In our own work at the CECM, we have had collaborations with dancers and with Luddy School engineers, in addition to frequent collaborations with the wonderful student performers of Jacobs, some of whom go out of their way to try unusual things, such as stomping on wooden platforms as part of the performance.
But there is another aspect of our work that sometimes points in the opposite direction. Composers write software to generate sound and facilitate the performance of their pieces. Some are performing their own work, often using gestural controllers, and they sometimes make videos and design lighting to accompany their music. Some have built their own gestural controllers. We have even created robotic “drummers” to beat on woodblocks, cymbals, and frying pans under the control of the composer’s software. So as valuable as collaboration is, students often benefit from trying to do things themselves that they don’t normally do. I see this dynamic of multidisciplinary collaboration versus jack-of-all-trades DIY activity continuing to play out, at least in our experimental electronic music world.
Computer music has been the nearly exclusive preserve of white males for a long time. Thankfully, that has been changing, and the CECM is fortunate to include many women who are composing and performing, including at festivals in the US and abroad. It is gratifying to see this, and I expect the trend to continue.
PHOTO: Oliver Kwapis performs Talk-back.
What are some of the most inspiring projects you’ve seen from your students in the past few years?
Our students constantly surprise and inspire me. Their best work happens when they’re invested in pulling everything together for a concert. I hesitate to single out pieces — there are just too many great ones to enumerate here. But one of them got me thinking about a possible benefit of this way of making music.
Oliver Kwapis, a recent IU Composition Master’s graduate, who is now in the Data-driven Music Performance & Composition PhD program at the University of Oregon, created and performed a piece of music called Talk-back on one of our CECM concerts. Oliver has suffered from severe performance anxiety in the past. His piece is an enactment of a minor panic attack: it includes a flurry of recordings of his voice expressing anxious thoughts and drawn-out versions of his “talk-backs,” or positive statements meant to calm himself. He controls all these sounds using a Gametrak — a device comprising two joysticks, each with a retractable tether that a performer can pull from the floor to well above their head when standing. Oliver seems to pull the sounds from his body as he nervously stretches the tethers to the ceiling. This visceral performance, with its sudden eruptions of sound, captures a sense of anxiety and uncertainty, but also the hope for healing.
Oliver writes of his interest in this kind of gestural performance: “As I realized that I wanted this type of performance to play a bigger role in my life, my performance anxiety didn’t wane but began to be matched by excitement. Conversely, the excitement gave me renewed energy to face my anxiety head-on. I decided to do so, in part, through my music — through performing.”
Since then, I have seen other students take on projects that let them confront personal issues in an artistic format. It seems that gestural performance of computer music, with its possibilities for inclusion of sound from outside the concert hall, enables composers to achieve a form of direct, personal musical expression that they regard as deeply fulfilling.