POSTER PRESENTATIONS BY STUDENTS!
Saturday, March 30 | Auer Hall Green Room
KYLE BROOKS
Generative Interaction: Facilitating Spontaneous Musical Accompaniment using MuBu for Max
This installation showcases real-time sound processing with MuBu for Max, which generates accompaniment to acoustic or data-driven sounds as they unfold. Drawing on segmented audio annotated with descriptors, along with aligned audio data, the system selects and processes samples to produce sounds that accompany the live input.
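As a rough illustration of the descriptor-matching idea described above, here is a minimal Python sketch of selecting the corpus segment whose descriptors best match a live analysis frame. It is a schematic stand-in, not MuBu's actual API; the corpus data and descriptor set are invented for the example.

```python
import numpy as np

# Hypothetical corpus: each segment of the pre-analyzed audio carries a
# descriptor vector (e.g. pitch, loudness, spectral centroid), loosely
# mirroring MuBu's segmented-buffer-plus-descriptors model.
corpus_descriptors = np.random.rand(500, 3)   # 500 segments, 3 descriptors
corpus_segments = [f"segment_{i}.wav" for i in range(500)]

def match_segment(live_descriptor: np.ndarray) -> str:
    """Return the corpus segment whose descriptors best match the live input."""
    distances = np.linalg.norm(corpus_descriptors - live_descriptor, axis=1)
    return corpus_segments[int(np.argmin(distances))]

# For each analysis frame of the live signal, play back the closest
# corpus segment to form the generated accompaniment.
live_frame = np.array([0.4, 0.7, 0.2])  # pitch, loudness, centroid (normalized)
print(match_segment(live_frame))
```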
Bio: Kyle Brooks is a saxophonist, composer, and educator currently pursuing a doctorate in Jazz Performance at the Jacobs School of Music. With roots in Chicago’s bustling music scene, Brooks is well-traveled and experienced across the fine arts, working in musical settings that range from jazz, rock, salsa, Bollywood, and folk to electronic music and beyond. These experiences, combined with his passion for education, connect him with people from many walks of life and shape the musical personality he brings to his work.
KAITLIN PET
Advances in Score Following
Score following technology allows automated tracking of a performer’s position within a musical score. Currently, most score followers use audio alone and expect the performer to play a piece straight through from beginning to end without pause. We present recent research showing 1) how we can track a musician during a “practice session” in which they pause often or jump around in the score, and 2) how computer vision-based hand recognition can be incorporated into score following to increase robustness and capture physical aspects of music-making.
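To make the “practice session” idea concrete, here is a toy Python sketch of a probabilistic score follower whose transition model allows pauses (self-loops) and jumps, in the spirit of the work described above. The score, probabilities, and observation model are invented for illustration and do not represent the authors’ actual system.

```python
import numpy as np

# Toy score follower: track the most likely score position given noisy
# pitch observations. Unlike a strict left-to-right model, the transitions
# also allow self-loops (pauses) and long jumps (skipping around).
score_pitches = np.array([60, 62, 64, 65, 67, 69, 71, 72])  # MIDI note numbers
n = len(score_pitches)

def transition_prob(i: int, j: int) -> float:
    if j == i:      return 0.2          # pause / repeated note
    if j == i + 1:  return 0.6          # normal forward motion
    return 0.2 / (n - 2)                # small uniform mass for jumps

def obs_prob(pitch: float, i: int) -> float:
    return np.exp(-0.5 * (pitch - score_pitches[i]) ** 2)

belief = np.ones(n) / n
for observed in [60, 60, 62, 67]:       # performer pauses on 60, then jumps to 67
    predicted = np.array([sum(belief[i] * transition_prob(i, j) for i in range(n))
                          for j in range(n)])
    belief = predicted * np.array([obs_prob(observed, j) for j in range(n)])
    belief /= belief.sum()
    print("estimated position:", int(np.argmax(belief)))
```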
Bio: Kaitlin Pet is a PhD student studying Music Informatics at the IU Luddy School. She works with Professor Christopher Raphael to study the combination of music performance and AI.
ISAAC SMITH
“Extension”, for solo flute and AI electronics
“Extension” is a work for solo flute and electronics, written for flutist Robin Meiksins, which employs the nn~ object, an AI sound-synthesis module developed by IRCAM. I created and trained my own model on a dataset of audio taken from Robin’s YouTube channel. Once loaded into the nn~ object, the model analyzes any input sound, breaks it down, and reconstructs it from small pieces of that flute dataset. In the piece, Robin performs live while her sound is processed, analyzed, and resynthesized by the nn~ object, creating an imperfect AI “echo” of her live sound.
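For readers unfamiliar with this kind of analysis-resynthesis, the following PyTorch sketch shows the general autoencoder idea at toy scale: sound is encoded into a latent representation and decoded using only what the model learned from its training data. It is a schematic stand-in, not the actual nn~ model architecture used in the piece.

```python
import torch
import torch.nn as nn

# Toy analysis-resynthesis loop: an autoencoder trained on one
# instrument's audio encodes incoming sound into a latent vector, then
# decodes it using only what it learned from the training set, which is
# why the output is an imperfect "echo" of the input.
class TinyAutoencoder(nn.Module):
    def __init__(self, frame_size=512, latent_size=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(frame_size, latent_size), nn.Tanh())
        self.decoder = nn.Linear(latent_size, frame_size)

    def forward(self, frame):
        return self.decoder(self.encoder(frame))

model = TinyAutoencoder()
# In performance, `frame` would be a window of the live flute signal;
# here random data stands in for one audio frame.
frame = torch.randn(1, 512)
with torch.no_grad():
    echo = model(frame)   # resynthesized through the learned flute model
print(echo.shape)         # torch.Size([1, 512])
```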
Bio: Isaac Smith is a DM student in Music Composition, with doctoral minors in Electronic Music and Music Theory. He received his Master’s in Music Composition from the University of Oregon in 2019. He has presented on music research involving AI multiple times at conferences and performances, and is a member of the coordinating committee for the AlgoRhythms summit.
XANDER TOTH
Sound Plasma and AI: Neural Network Timbre Synthesis and Hallucination
Exploring recent ideas in musical form, composer Xander Toth uses neural network timbre synthesis to morph between acoustic and synthetic sound sources in an unbound, fluctuating soundscape. At the same time, AI lends context to this sonic flux through a series of “hallucinative” classifications, playing back real-world audio in response to its analysis of the flux. The result is a dialogue between abstract and contextualized audio that calls into question our own (human) imaginative associations with sound.
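The timbre-morphing idea can be sketched as interpolation in a learned latent space. The following toy PyTorch example assumes a pretrained encoder/decoder pair (replaced here by untrained stand-ins) and illustrates the general technique only, not Toth’s actual system.

```python
import torch

# Schematic timbre morph: encode two sources (acoustic and synthetic)
# into a shared latent space, then decode interpolated latents to move
# continuously between them. The encoder/decoder are stand-ins for a
# trained neural timbre-synthesis model.
encoder = torch.nn.Linear(512, 16)
decoder = torch.nn.Linear(16, 512)

acoustic_frame = torch.randn(1, 512)   # e.g. a recorded instrument frame
synthetic_frame = torch.randn(1, 512)  # e.g. a synthesizer frame

with torch.no_grad():
    z_a = encoder(acoustic_frame)
    z_s = encoder(synthetic_frame)
    for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
        z = (1 - alpha) * z_a + alpha * z_s   # linear latent interpolation
        morphed = decoder(z)                  # one frame of the morph
        print(f"alpha={alpha}: frame shape {tuple(morphed.shape)}")
```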
Bio: Xander Toth is a composer and computer music artist residing in Bloomington, IN. He received his Master’s in Composition and Computer Music Composition from the Indiana University Jacobs School of Music, where he currently teaches introductory computer music. His compositional interests include generative music, new interfaces for musical expression, and developments in artificial intelligence.
DMITRI VOLKOV
Malt Transformer
Malt Transformer is an implementation of the GPT architecture in the Racket programming language, built with Malt. It is designed to serve as a next step for people learning about neural networks from the book The Little Learner.
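The poster’s implementation is in Racket; as a language-neutral point of reference, here is a minimal NumPy sketch of the core GPT building block, causally masked scaled dot-product self-attention. The weights and dimensions are invented for the example.

```python
import numpy as np

# Single-head causal self-attention, the central operation of the GPT
# architecture: each position attends only to itself and earlier positions.
def causal_self_attention(x, Wq, Wk, Wv):
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                    # scaled dot products
    mask = np.triu(np.ones_like(scores, dtype=bool), 1)
    scores[mask] = -np.inf                           # block attention to the future
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
x = rng.normal(size=(seq_len, d_model))              # toy token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(causal_self_attention(x, Wq, Wk, Wv).shape)    # (4, 8)
```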
Bio: Dmitri Volkov primarily operates within the realms of music and computer science, both of which he currently studies at IU Bloomington, within the Jacobs School of Music and the Luddy School of Informatics, Computing, and Engineering. Dmitri is self-taught in C++/JUCE and Python, and has published several apps on the App Store and Google Play; he is currently developing Pivotuner, an audio plugin that enables adaptive pure intonation on keyboard instruments. Dmitri also wants you to know that he does not usually write in the third person.