Can AI Make Musicians More Creative?

Google and Sony want to change the way artists think about artificial intelligence

Late last year, a team of Sony researchers based in Paris released a pair of new pop songs. One, called "Daddy's Car," straightforwardly echoed the soft '60s psychedelia of The Beatles; the other, "Mr. Shadow," was an electro-ish update on classic jazz à la Duke Ellington or Cole Porter. The songs were just fine (if that), from a critical standpoint. What made them major events was the fact that they were composed using artificial intelligence, specifically the Flow Machines software developed by Sony's Computer Science Laboratory. Computers, not humans, had composed the melodies of the songs, pulling cues from a database of more than 10,000 diverse lead sheets.

Once the algorithm spit out a melody, human composer Benoît Carré stepped in to pen lyrics and produce the finished tracks. But that didn't stop music fans from wondering if their dear pop idols were soon to be replaced by a gaggle of Dolores Abernathy clones making future house (or whatever the kids are listening to these days). "This AI-written pop song is almost certainly a dire warning for humanity," read a headline at The Verge. "It looks like robots are one step closer to taking over the world," wrote Complex.

Life in 2017 does feel a little like The Jetsons, as people install digital assistants named Alexa in their homes, buy "smart fridges" that tell you what you're missing in your kitchen, and welcome self-driving cars onto the road. And while it's easy to see how computers can help you buy a t-shirt through voice command or calculate how many calories you consume in a meal, when it comes to AI and art, the lines are blurrier. Many remain skeptical of a machine's ability to create something that rivals or replaces human creativity. Programs like Sony's Flow Machines and Google's Magenta project change the terms of this debate by putting real, live creators front and center, emphasizing the potential for post-human collaboration. If Siri is your personal assistant, the theory goes, an artificial-intelligence routine might be your new bandmate.

AI-composed music is not a new phenomenon by any means. The Illiac Suite, a string-quartet piece that American composers Lejaren Hiller and Leonard Isaacson created in 1957 using an early computer at the University of Illinois, is widely considered the first computer-assisted composition. In the '60s, Italian composer Pietro Grossi put down his cello and turned to computers because he dreamed of making music without musicians. Brian Eno coined the term "generative music," for music that improvises itself by way of a computer, while working with the program SSEYO Koan in 1995. Since then Eno has continued making generative music, using "mutation software" created by developer Peter Chilvers for his most recent album, last month's Reflection.

One problem, says composer and AI researcher David Cope, is that people throw the phrase "artificial intelligence" around too often. "We have 'artificial intelligence' running cars so we don’t have to drive them, we have it in blenders in our kitchens, we have it in our television sets — but the fact is we don’t have any of that," he says. "Our machines are still doing what they always did, and that’s what we tell them to do. When you hear someone say my work is 'computer composed,' it isn’t — it’s computer assisted."

Cope began experimenting with computer-generated music while teaching at Ohio's Miami University in 1975. At that time, programming meant punch cards, with no visual output to speak of. "It took at least two or three weeks to produce a very short piece, which, by the way, was hideous," he says. It wasn't until the mid-'80s that Cope had his breakthrough. Struck with writer's block while working on an opera, he set out to make software that would help him compose.

The result was EMI (Experiments in Musical Intelligence), or "Emmy," a computer program that created music in the styles of other classical composers. Emmy eventually helped Cope complete his opera "Cradle Falling" and produced convincing riffs on composers like Bach and Mozart, which Cope compiled and released as an album in 1997. But Emmy wasn't nearly as collaborative as his next bot, which he dubbed "Emily Howell." Like the French bots that co-wrote "Daddy's Car," Emily Howell composes from a database of digital sheet music — but she can actually respond and then change her work when Cope says yes or no to fragments of music, a method he calls "the carrot approach." "You end up learning how to talk — mostly typing in, and then putting in — little pieces of music, and it composes," Cope says. "I say yes or no, and it slowly learns what I like."

He laughs and adds: "Emily’s job in life is to please me, which is kind of nice."
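
Cope hasn't published Emily Howell's internals, but the yes/no loop he describes is easy to picture in miniature. The Python sketch below is an invented illustration, not his code: it proposes melodic fragments from a tiny database, asks for a verdict, and nudges a crude preference score accordingly. The fragments, the features, and the learning rate are all placeholders.

```python
# A toy sketch (not Cope's actual software) of the "carrot approach":
# propose short fragments from a database, ask for yes/no feedback,
# and learn a crude preference score over simple features.
import random

# Hypothetical fragment database: each fragment is a list of MIDI pitches.
FRAGMENTS = [
    [60, 62, 64, 65], [67, 65, 64, 62], [60, 64, 67, 72],
    [72, 71, 69, 67], [60, 60, 67, 67], [64, 62, 60, 59],
]

def features(fragment):
    """Very crude features: average pitch and average step size."""
    steps = [abs(b - a) for a, b in zip(fragment, fragment[1:])]
    return {"avg_pitch": sum(fragment) / len(fragment),
            "avg_step": sum(steps) / len(steps)}

weights = {"avg_pitch": 0.0, "avg_step": 0.0}

def score(fragment):
    """Score a fragment with the current preference weights."""
    f = features(fragment)
    return sum(weights[k] * f[k] for k in weights)

def learn(fragment, liked, rate=0.01):
    """Nudge weights toward (or away from) the fragment's features."""
    sign = 1.0 if liked else -1.0
    for k, v in features(fragment).items():
        weights[k] += sign * rate * v

# Interactive session: propose the best-scoring of a few random fragments,
# then record the listener's yes/no verdict and update the preferences.
for _ in range(5):
    candidate = max(random.sample(FRAGMENTS, 3), key=score)
    answer = input(f"Keep fragment {candidate}? (y/n) ").strip().lower()
    learn(candidate, liked=answer.startswith("y"))
```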

Most contemporary music AIs follow this approach. While exceptions exist — Jukedeck uses algorithms to automatically generate copyright-free music that can be used in short films or DIY videos, with intense collaboration a secondary concern — researchers today are generally aiming to make tools that are accessible, easily manipulated, and friendly to individual creativity. The goal isn't for computers to replace composers, but rather to make these computer programs so helpful and intelligent that composers want to make music with them.

Google's Magenta uses TensorFlow — the same open-source machine-learning software that helps the tech giant reverse-image-search photos, translate languages, and suggest search terms — to make art. Google research scientist Douglas Eck describes a process similar to Cope's carrot approach called "reinforcement learning," in which a computer is exposed to a wide range of musical styles. The computer then does its best to replicate the kind of song you want (country-pop tune? '70s hard-rock jam?), and you tell it how good or bad a job it's doing.
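
Eck's description maps loosely onto an old idea: generate something, score it, and push the generator toward whatever scored well. The sketch below is a toy illustration of that loop, not Magenta or TensorFlow code; the candidate intervals, the hand-written reward, and the update rule are invented for the example, with a scoring function standing in for the human listener.

```python
# A toy sketch of reward feedback on a melody generator, loosely in the
# spirit of the reinforcement-learning setup Eck describes. This is an
# invented illustration, not Magenta/TensorFlow code.
import math
import random

INTERVALS = [-2, -1, 0, 1, 2]          # candidate steps between notes
prefs = {i: 0.0 for i in INTERVALS}    # learned preference per interval

def sample_interval():
    """Softmax sampling: preferred intervals get chosen more often."""
    weights = [math.exp(prefs[i]) for i in INTERVALS]
    return random.choices(INTERVALS, weights=weights, k=1)[0]

def generate_melody(start=60, length=8):
    """Build a melody by repeatedly sampling the next interval."""
    melody, chosen = [start], []
    for _ in range(length - 1):
        step = sample_interval()
        chosen.append(step)
        melody.append(melody[-1] + step)
    return melody, chosen

def reward(melody):
    """Stand-in for 'how good a job it's doing': penalize big leaps
    and drifting far from the starting note."""
    drift = abs(melody[-1] - melody[0])
    leaps = sum(abs(b - a) for a, b in zip(melody, melody[1:]))
    return -0.1 * drift - 0.05 * leaps

for episode in range(200):
    melody, chosen = generate_melody()
    r = reward(melody)
    for step in chosen:                 # nudge chosen intervals by reward
        prefs[step] += 0.1 * r

print("learned interval preferences:", prefs)
```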

The most interesting results, Eck says, come when his team brings in a human musician to take the wheel. "They can say, 'This is what I want from my music, I know how to write this with my own rules and feedback,'" Eck says. "It enables artists to directly control the flavor of the output."

Sony's Flow Machines Project, meanwhile, takes its name from psychologist Mihaly Csikszentmihalyi's concept of flow, the most creative, energized state of mind for making work. And while pieces like "Daddy's Car," which was billed as an AI-composed song, certainly grab people's attention, Flow Machines communications officer Fiammetta Ghedini says that's only part of their work. "Our goal is to create tools that are conceived and designed for collaboration," she says over Skype from Paris. "[The program] can autonomously generate, but then it’s less interesting — because it’s true that art is a human activity."

Flow Machines is especially concerned with computer programs getting a grip on style, whether it's that of a canonical artist or of a musician who's just beginning to work with the program. Using their program FlowComposer, artists can pick a pre-memorized style and use it to help build out melody and harmony parts for their song. "It's as if you knew by heart all the Beatles songs, or some Brazilian guitarist's songs," Ghedini says.

The Sony Lab also aspires to dramatically expand the abilities of the traditional looping pedal through a program they call the Reflexive Looper. This module identifies the notes that a musician is playing in real time, recognizes the style in which they're being played, and generates new notes that fit right in. "This could become an instrument for playing at home every day," Sony assistant researcher Marco Marchini says. "It could continuously learn from the feedback of the musician, so that it could become more and more intelligent."
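
Sony hasn't published a simple recipe for the Reflexive Looper, but the core idea (listen, infer the player's style, answer in kind) can be mocked up in a few lines. The sketch below is a hypothetical, offline stand-in, not Sony's software: it builds a pitch-class profile from incoming MIDI-style note numbers and generates reply phrases biased toward the pitches the player favors.

```python
# A toy sketch of the idea behind a "reflexive" looper: listen to the notes
# a player feeds in, infer a crude pitch-class profile, and answer with new
# notes drawn from that profile. Invented illustration only; the real
# Reflexive Looper works on live audio/MIDI in real time.
import random
from collections import Counter

def update_profile(profile, incoming_notes):
    """Count which pitch classes (C, C#, D, ...) the player favors."""
    profile.update(note % 12 for note in incoming_notes)
    return profile

def respond(profile, length=4, octave=5):
    """Generate a short answer phrase biased toward the player's pitches."""
    classes = list(profile.keys())
    weights = [profile[c] for c in classes]
    return [random.choices(classes, weights=weights, k=1)[0] + 12 * octave
            for _ in range(length)]

profile = Counter()
# Pretend the musician plays two phrases (MIDI note numbers, C-major-ish).
for phrase in ([60, 64, 67, 72], [62, 65, 69, 65]):
    profile = update_profile(profile, phrase)
    print("machine answers:", respond(profile))
```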

But will musicians actually want to work with AI? The relationship between artists and their computers inspired U.K. electronic producer Scratcha DVA, also known as Leon Smart, on his 2016 record NOTU_URONLINEU. The album, released on Hyperdub, was an exploration into the suffocating and intimate connection people have with their computers, with Smart using dark, sample-heavy production to underline what our digital lives sound like. But Smart isn't totally comfortable with the idea of AI-assisted musical collaboration, especially when it comes to the emphasis on AI's efficient stylistic mimicry.

"That's cool, but I'd really want AI to further what we do, not just ... do what we do," he tells me over Skype. Much as past artists used synthesizers and MIDI to expand their music making, Smart suggests that today's artists consider using AI not to sound like other musicians, but instead to create something new. "Me and the AI might be able to be Stevie Wonder in 24 hours, but how is that furthering the sound of music?" Smart says. "It's a bit depressing."

That said, Smart acknowledges the possible commercial value in AI-assisted carbon copies. "It might be good for major record labels or daytime radio," he adds with a laugh. "Like if Tinie Tempah comes out with a record and everyone wants a new Tinie Tempah ... me, personally, I don't want another Tinie Tempah! I want the next thing."

Québécois electronic artist Caila Thompson-Hannant, who performs as Mozart's Sister, is more open to the idea of collaborating with AI. "The question for me is, how elastic can these tools be so I can really mess with them?" she says. Thompson-Hannant, who works primarily with the music-production software Ableton, notes that artists will find interesting ways to use any piece of technology, whether or not those uses are intended by manufacturers. "When you really dig into a program, you're going to find things you didn't expect," she says. "That's what I hope remains in this process: Whether you're working with an AI program or digital sounds or your voice or whatever instrument, you will find a way to use it that's unexpected."

When David Cope released some of Emmy's first classical compositions in the 1990s, critics were "extremely antagonistic," he says. "Most of the stuff had to do with computer music not having a soul. I would open up a page of sheet music and say, 'OK, where is the soul in all these black dots and black lines?' Of course they couldn't find it and said, 'Well, it's the performer's soul.' In that case I say, 'Well, my computer can be played by living human beings, so why does it not have soul?'"

While complex software has become a commonplace part of daily life around the world, the idea of creating music with AI or algorithms remains an outlier. Sony's Ghedini, noting that photography was once an exotic new technology, thinks the only way to get people to see the creative potential of AI is to perfect the final products. "I think that if we compose and if artists compose catchy and beautiful music, then that’s the most important thing, and then it doesn’t matter how it is done," she says. "I think we have to work to convince people and the public on the basis of the results of the content, not on philosophical ideas."

Thompson-Hannant tends to agree. "I'm sure one day we'll all hear an AI song and we'll be like, damn, that's really good," she says, laughing. "It'll be like a 'Call Me Maybe' thing. Just kind of weird enough but still hitting a sweet spot. And then it will be all over the radio and totally popular."