AI starts a music-making revolution and plenty of noise about ethics and royalties

Third of four parts

Pyotr Ilyich Tchaikovsky, the Russian composer known for “The Nutcracker” ballet and lush string arrangements, died 130 years ago.

Yet his influence shows up three minutes into “Vertigo,” a song that uses artificial intelligence to fuse a melody from singer-songwriter Kemi Sulola with sounds generated by NoiseBandNet, a computer model from doctoral student Adrian Barahona-Rios at the University of York in Britain and Tom Collins, associate professor of music engineering technology at the University of Miami Frost School of Music.

The model used Tchaikovsky’s “Souvenir de Florence” string sextet, ambient noises and other “training” clips to generate audio samples based on musical ideas from Ms. Sulola, resulting in a unique sonic landscape.

The song won third place at the international 2023 AI Song Contest.

“It’s a good example of technology, or innovation in technology, just kind of being a catalyst for creativity,” Mr. Collins said. “We would not have written a song with Kemi Sulola, and she wouldn’t have written one with us, were it not for this interest around AI.”

Artificial intelligence, which allows machines to receive inputs, learn and perform humanlike tasks, is making a splash in health care, education and multiple economic sectors. It also has seismic implications for music-making and record labels, posing existential questions about the meaning of creativity and whether machines are enhancing or replacing human inspiration.

Because of rapid advancements in AI technology, the web is chock-full of programs that can clone the Beatles’ John Lennon, Nirvana’s Kurt Cobain or other well-known voices. AI can spit out completed songs with a few text prompts, challenging the copyright landscape and sparking mixed emotions in listeners who are amused by new possibilities but skittish about what comes next.

“Music’s important. AI is changing that relationship. We need to navigate that carefully,” said Martin Clancy, an Ireland-based expert who has worked on chart-topping songs and is the founding chairman of the IEEE Global AI Ethics Arts Committee.

Online generators producing fully baked songs have exploded in the past year or two, alongside ChatGPT, a chatbot that can generate written pieces.

Other AI and machine-learning programs in music include “tone transfer” apps that allow a singer’s melody to come back in the form of, say, a trumpet instead of a voice.

Programs that help mix and master demo tapes scan the recordings and might advise a bit more vocals here or a little less drumming there.

Even those steeped in the AI music phenomenon find it hard to keep up.

“There’s a point in each semester where I say something isn’t possible yet, and then some student finds that exact thing has been released to the public,” said Jason Palamara, an assistant professor of music technology at Indiana University, Indianapolis.

Some AI programs can fill skill gaps by allowing creators to express musical ideas fully. It’s one thing to have a rough melody or harmonic idea and another to execute it without the instrumental skills, studio time or ability to enlist an ensemble.

“That’s where I think the really exciting stuff is already happening,” said Mr. Collins, using the example of someone who wants to add a bossa nova beat to a song but needs a program to explain how because it’s not part of their musical palette. “That’s what I can do with the generative AI that I couldn’t do before.”

Other AI advances in music are geared toward fun. Suno AI’s Chirp app can produce a complete song within minutes after being fed a few instructions.

“If you did all of the 10 sales points for reintroducing the ukulele to market now in North America, we’d see a correlation between the sales pitch for that and for AI music,” said Mr. Clancy, referring to the four-string instrument that gives many musicians an entry point. “It’s affordable. It’s fun. That’s the important part about these tools. Like they’re really, really good fun and they’re really easy to use.”

To underscore this point, Mr. Clancy asked Suno AI to write a song about the drafting of this article.

Creators in the fast-growing field of music generators tend to emphasize the need to democratize music-making. Loudly says its growing team is “made up of musicians, creatives and techies who deeply believe that the magic of music creation should be accessible to everyone.”

Voice cloning is another popular front in AI music production. A popular clip on the internet has Cobain singing Soundgarden’s “Black Hole Sun” instead of fellow grunge icon Chris Cornell, who recorded the original. The Beatles broke up decades ago but released a new song, “Now and Then,” using an old demo and AI to produce a clearer version of Lennon’s voice.

Voice cloning is a fun, if somewhat eerie, experiment for listeners but poses serious questions for the music industry. One record label faced a test case earlier this year when a user named “ghostwriter” uploaded “Heart on My Sleeve,” an apparent duet between rapper Drake and pop star The Weeknd. The issue, of course, is that neither artist was involved in the song. It was crafted with voice-cloning AI.

Universal Music Group, citing a violation of copyright law, removed the song from streaming services. The case raised questions about which aspects of the songs are controlled by the labels, the artists and the creators of AI content.

“Does Drake own the sound of his voice, or [does] just the record label he’s signed to, UMG, own the sound of his voice? Or is this an original composition that is fair use?” Rick Beato, an instrumentalist and producer, said in an AI segment on his YouTube channel. “People are not going to stop using AI. They’re going to use it more and more and more. The only question is: ‘What are the labels going to do about it, what are the artists going to do about it and what are the fans going to do about it?’”

In the Drake-Weeknd case, Universal said the “training of generative AI using our artists’ music” is a breach of copyright, but some artists are embracing AI so long as they get a cut of proceeds.

“I’ll split 50% royalties on any successful AI-generated song that uses my voice,” electronic music producer Grimes tweeted earlier this year.

The U.S. Copyright Office offered some clarity in March about works produced primarily by a machine. It said it would not register those works.

“When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship,” the guidance said. “As a result, that material is not protected by copyright and must be disclaimed in a registration application.”

The Biden administration, the European Union and other governments are rushing to catch up with AI and harness its benefits while controlling its potentially adverse societal impacts. They are also wading through copyright and other matters of law.

Even if they devise legislation now, the rules likely will not go into effect for years. The EU recently enacted a sweeping AI law, but it won’t take effect until 2025.

“That’s forever in this space, which means that all we’re left with is our ethical decision-making,” Mr. Clancy said.

For now, the AI-generated music landscape is like the Wild West. Many AI-generated songs are hokey or just not very good. A glut of AI-generated music might require a curator to filter through it all and find what’s worth listeners’ time.

Other thorny questions include whether using the voices of artists such as Cobain, who died by suicide in 1994, is in good taste and what is gained by generating AI music in his name.

“If we train a model on Nirvana and then we say, ‘Give me a new track by Nirvana,’ we’re not going to get a new track from Nirvana. We’re going to get a rehash of ‘Nevermind,’ ‘In Utero’ and ‘Bleach,’” said Mr. Palamara, referring to albums the band released from 1989 through 1993. “It’s not the same thing as, like, if Kurt Cobain was alive today. What would he do? Who knows what he would do?”

At a Senate hearing in November, Mr. Beato testified that an “AI Music dataset license” is needed so listeners know how the AI platform has been trained and so copyright holders and artists can be compensated fairly after their work contributes to the piece.

Mr. Palamara worries that as AI tools become easier to use, musicians may lose the ability to make music at a virtuosic level. Some singers already rely on pitch-correction technologies such as Auto-Tune.

“The new students coming in the door know how to use these technologies and never really have to strive to sing in tune, so it makes it harder to justify that they should learn how,” he said. “Some might argue that maybe this just means the ability to sing in tune is less important in today’s world, which might be true. But you can’t argue that humankind is being improved by the erosion of certain abilities we’ve been honing for centuries.”

Another concern is that machines could replace jingle writers or musicians who rely on gigs for income.

At the same time, AI is opening opportunities for musicians and arts organizations.

Lithuanian composer Mantautas Krukauskas and Latvian composer Maris Kupcs produced the first AI-generated opera for the Lithuanian capital of Vilnius in September.

Only the words for the 17th-century piece “Andromeda” survived, but the modern-day composers restored the opera using an AI system called Composer’s Assistant.

The model was developed by Martin Malandro, an associate professor of mathematics at Sam Houston State University, and can fill in melody, harmony and percussion that fit specific prompts. The European composers trained the model on the opera’s libretto and surviving music from the Baroque-era composer Marco Scacchi and his contemporaries to produce an opera that might have sounded like the original, even if it wasn’t the exact score.

Mr. Malandro said he wasn’t directly involved in the restoration but acknowledged that he is credited as the contributor to the AI model. “My understanding is that the opera was sold out and received well at its premiere,” he said.

British arts nonprofit Youth Music conducted a survey that found 63% of people ages 16-24 say they are embracing AI to assist in their creative processes, though interest wanes with age. Only 19% of those 55 and older said they would be likely to use it.

Mr. Palamara said mixing and mastering are ripe for AI use. He took some of the “awful” demos that his high school band made in the 1990s and ran them through a program from iZotope that analyzed the demos and found ways to improve them.

Experts say programs can also take over some grunt work for music professionals who want to focus on one project but let AI assist with the assignments they need to pay the bills and meet tight deadlines.

AI is “definitely going to change our musicianship,” said Mr. Collins. “But I think change in musicianship has been happening for centuries.”

Source: WT