Since the beginning of musical performance, artists have pushed the boundaries of their craft. From “plugging in” guitars to digital DJing, such innovations in technology and performance are the lifeblood of musical evolution.
These days, that frontier lies at the intersection of music, performance, artificial intelligence and machine learning. Such technology can push the possibilities of music production and performance forward in unexpected ways, perhaps even creating new genres and definitions along the way. Artificial intelligence enables automated systems, deeper pattern recognition, and the ability to compose and perform music in ways humans simply cannot, changing the very relationship between the organic and inorganic in the process.
The artists, musicians, and documentaries featured at IN-EDIT include some of history’s most forward-thinking names, many of whom have been integral in shaping the landscape of music as we know it today. Here, we look at five of modern music’s most futurist-minded artists who are integrating artificial intelligence and machine learning into their compositions and performances.
Actress

A pioneer of electronic music, Actress (aka Darren Jordan Cunningham) has produced some of the most forward-thinking music in recent memory. Since his breakthrough release ‘Splazsh’, Actress has fused elements of garage, dubstep, Detroit techno and house, mixed together with the darkest of bass. Always at the forefront of musical innovation, Actress currently tours with Young Paint – a 100% AI character. After a year of ingesting Actress’ sounds, Young Paint is able to take the stage as a life-size projection, creating a unique man/machine digital duet.
Arca
“For 2 years my music will be playing in the lobby of the newly reopened @themuseumofmodernart. It’s such an honor to be able to compose for an AI that will never make the music play the same way twice – it’s a live transmission forever in mutation. Check out the brilliant https://bronze.ai for more info. Thanks to the Parreno studio for trusting me and giving us carte blanche throughout the composition process; it’s amazing to think of how many people that don’t know about my work will have my sounds passing through their body as they pass through the space into the MoMA. Thanks to everyone involved, especially @damienqd! It’s an honor 💖 The piece is titled Echo (Danny the Street) by Philippe Parreno, who was commissioned to produce the work and asked me to create this entity together with him. I was happy to suggest the word Echo based on the myth of Narcissus and Echo – look into how it unfolds, it’s one of my favorite myths! Thanks Philippe! @themuseumofmodernart”
In October 2019, Arca soundtracked the lobby of New York City’s Museum of Modern Art with an AI-produced track to be played over the next two years. Having been asked to create a piece transforming the museum’s lobby into a “real public space”, Arca used the AI software Bronze to create a track that “will never play the same way twice”. Part of Philippe Parreno’s Echo (Danny the Street), the enigmatic musician’s contribution acts as a “live transmission forever in mutation”.
Holly Herndon

Berlin-based artist Holly Herndon has been pushing the boundaries of technology-based music since her Ph.D. studies at Stanford University’s Center for Computer Research in Music and Acoustics. Often using her own voice as a primary input, with instrumentation created through the visual programming language Max/MSP, Herndon is widely considered the preeminent digital musician utilizing artificial intelligence in her craft. Her 2019 album PROTO took this to another level with the addition of a fully AI vocal collaborator, ‘Spawn’. Co-created by Herndon and Mat Dryhurst, ‘Spawn’ is the result of hundreds of vocalists brought together to teach it how to identify and interpret sounds, ultimately “raising” the AI to interact with organic beings on stage.
Toro y Moi
Last year, in collaboration with personalized sound environment company Endel, Toro y Moi released a four-track EP of AI-generated soundscapes. The tracks ‘Flow’, ‘Move’, ‘Balance’ and ‘Connect’ make up the ‘Smartbeats’ EP, which is meant to “offer an enhanced wellness experience” by mixing Toro y Moi’s music with Endel’s algorithm, resulting in “scientifically engineered sounds that react to your current state and needs”.
YONA

Created by Ash Koosha and Isabella Winthrop, YONA is an “auxiliary human” – a persona driven by artificial intelligence and digital technology. Co-creator Ash Koosha has been a primary figure in the landscape of technologically influenced music for some time: the British-Iranian composer introduced the first virtual reality album in 2015 and performed in VR at London’s Institute of Contemporary Arts. On his new album featuring YONA, the CGI-enhanced artist’s lyrics, chords, and voice are all the results of generative software. In the live space, YONA exists as a 3D hologram, synthesizing the ideas of human producers and songwriters into her own distinct style.
Featured Image taken from Actress/Young Paint (Live AI/AV)