Madrid, 5 (Europa Press)
Researchers at the University of Waterloo have developed LyricJam, an advanced computer system that can create lyrics for live instrument music.
The system, described in a paper accepted at the International Conference on Computational Creativity and previously published on arXiv, can help artists compose new lyrics that fit the music they are playing.
“I’ve always had a great love for music and an interest in learning about the creative processes behind some of my favorite songs,” Olga Vechtomova, one of the researchers who conducted the study, told TechXplore. “This led me to research music and lyrics and how machine learning can be used to design tools that inspire music artists.”
Vechtomova and her colleagues have been conducting research on lyric generation for several years. Initially, they developed a system that could learn characteristics of an artist’s lyrical style by analyzing audio recordings of their songs and the lyrics they had composed in the past. That system then used the information gathered in its analyses to create lyrics aligned with a particular artist’s style.
More recently, the researchers began to investigate the possibility of creating lyrics for audio clips of recorded instrumental music. In their new study, they set out to go further and build a system that could generate lyrics for live music as it is performed.
“The aim of this research was to design a system that can produce lyrics reflecting the moods and emotions expressed through different aspects of music, such as chords, instruments, and percussion,” Vechtomova said. “We set out to create a tool that musicians could use to inspire their own songwriting.”
Essentially, Vechtomova and her colleagues set out to create a system that could process raw live music, performed by an individual musician or an ensemble, and generate lyrics matching the emotions the music expresses. Artists can then review the generated lyrics, drawing inspiration from them or adapting them, and in the process discover interesting themes or lyrical ideas they had not thought of before.
“The scenario we imagine is an artificial intelligence system that acts as a co-creative partner with a musician,” Vechtomova explained. From the user’s point of view, LyricJam is very simple: the artist plays live music, and the system displays lines of lyrics generated in real time in response to what it is hearing. The lines created during a session are saved, so the artist can look them over after they have finished playing.
The system works by converting raw audio into spectrograms and then using deep learning models to generate lyrics that match the music it is processing in real time. The architecture consists of two variational autoencoders, one designed to learn representations of the music audio and the other to learn representations of song lyrics.
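The first step described above, turning raw audio into a spectrogram, can be sketched with a short-time Fourier transform. The snippet below is an illustrative sketch using only NumPy, not the authors’ code; frame size, hop length, and the synthetic sine-tone input are assumptions chosen for the example.

```python
import numpy as np

def spectrogram(audio, frame_size=512, hop=256):
    """Slice the signal into overlapping windows and take the magnitude
    of the real-input FFT of each window, yielding a magnitude spectrogram."""
    window = np.hanning(frame_size)
    frames = [
        np.abs(np.fft.rfft(window * audio[start:start + frame_size]))
        for start in range(0, len(audio) - frame_size + 1, hop)
    ]
    return np.array(frames)  # shape: (num_frames, frame_size // 2 + 1)

# One second of a 440 Hz sine tone at a 16 kHz sample rate stands in
# for a live audio buffer.
sr = 16000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440.0 * t)
spec = spectrogram(audio)
```

A time–frequency image like `spec` is what a convolutional encoder can then consume; with 512-sample frames at 16 kHz, each frequency bin spans 31.25 Hz, so the 440 Hz tone concentrates its energy around bin 14.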
Vechtomova and colleagues then devised two new mechanisms to align the representations of music and lyrics learned by the autoencoders. Ultimately, these mechanisms allow the system to learn what kinds of words go well with a given piece of instrumental music.
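The idea of aligning two learned representations can be illustrated with a toy objective: encode music features and lyric features into a shared latent space and penalize the distance between codes that co-occur. This is a minimal sketch of the general alignment principle, not the paper’s actual mechanisms; the encoders, feature sizes, and latent dimension below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy one-layer 'encoder' projecting an input into a shared latent space."""
    return np.tanh(x @ W)

# Hypothetical dimensions: 257 spectrogram features, 300 lyric features,
# a 64-dimensional shared latent space.
W_music = rng.standard_normal((257, 64)) * 0.05
W_lyric = rng.standard_normal((300, 64)) * 0.05

music_feat = rng.standard_normal(257)   # e.g. one spectrogram frame
lyric_feat = rng.standard_normal(300)   # e.g. an embedded lyric line

z_music = encode(music_feat, W_music)
z_lyric = encode(lyric_feat, W_lyric)

# Alignment term: pull the latent codes of co-occurring music and lyrics
# together, so the model learns which words suit which sounds.
alignment_loss = np.mean((z_music - z_lyric) ** 2)
```

In training, a term like `alignment_loss` would be added to each autoencoder’s reconstruction objective and minimized by gradient descent over the encoder weights.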
To evaluate the system, Vechtomova and her colleagues conducted a user study in which they asked musicians to play live music and share their feedback on the lyrics it generated. Interestingly, most of the musicians who participated said they viewed LyricJam as a non-judgmental jam partner that encouraged them to improvise and experiment with unusual musical expressions.