Remember wannabe pop star Rebecca Black’s much-maligned song Friday from a few years back? As poor as the song itself was, it did give us one brilliant spinoff: the enterprising work of YouTuber HeyMikeBauer, who performed a cover of the song in the style of legendary folk singer Bob Dylan.
If you liked that (and, based on its YouTube views, a whole lot of you did), a new artificial intelligence project may be exactly what the doctor ordered. Researchers at the U.K.’s Birmingham City University are developing a neural network they hope will one day predict how a piece of music might have sounded had it been created by an earlier artist, and then generate it for you. Looking for a Pink Floyd cover of Jay-Z? How about a Beethoven symphony re-creating (or, well, pre-creating) The Beatles’ seminal Sgt. Pepper’s Lonely Hearts Club Band? You’ve come to the right place!
“The idea is that we could train a neural network with the work of a musician,” Islah Ali-MacLachlan, senior lecturer in sound engineering, told Digital Trends. “We would use a range of tracks as an input, and the network would automatically detect the start and end of each individual note, the harmonic content, and other important classification data. Based on this we would then input your playing — perhaps a melody or guitar solo — and the system would change your audio. Imagine the phone apps that turn your photo into a Monet or Van Gogh — this would do the same for recordings.”
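To make the idea a little more concrete, here is a rough sketch of the kind of note-and-harmony preprocessing Ali-MacLachlan describes: detecting where each note starts and summarizing its harmonic content so the results can feed a network. It is not the team's actual pipeline; the librosa library, the chroma features standing in for "harmonic content," and the function and file names are all assumptions for illustration.

```python
import numpy as np
import librosa

def extract_note_features(path, sr=22050):
    """Detect approximate note boundaries and per-note harmonic content."""
    y, sr = librosa.load(path, sr=sr)

    # Keep the harmonic component of the signal; discard percussive transients.
    y_harmonic = librosa.effects.harmonic(y)

    # Estimate note-start frames from spectral-flux onset strength.
    onset_frames = librosa.onset.onset_detect(y=y, sr=sr, backtrack=True)
    onset_times = librosa.frames_to_time(onset_frames, sr=sr)

    # Chroma describes which pitch classes are sounding -- a crude stand-in
    # for the "harmonic content" mentioned in the quote above.
    chroma = librosa.feature.chroma_cqt(y=y_harmonic, sr=sr)

    # Average the chroma between consecutive onsets: one vector per note/event.
    boundaries = np.concatenate([onset_frames, [chroma.shape[1]]]).astype(int)
    note_vectors = [
        chroma[:, start:end].mean(axis=1)
        for start, end in zip(boundaries[:-1], boundaries[1:])
        if end > start
    ]
    return onset_times, np.array(note_vectors)

# Per-note vectors extracted from many tracks by a single artist would form
# the training data for the style network described above.
```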
Ali-MacLachlan says the project is still in its early stages, with the current focus on traditional Irish flute music. “It is difficult for a computer to determine when a note changes when there may not be a pronounced attack, like a plectrum hitting a string or a stick hitting a drum head, but we have a system that can deliver 90 percent accuracy in some contexts,” he said. “We have also developed some techniques for classifying timbre and looking at key differences between players. At present, we are working on being able to automatically define different notes to train the neural networks, and from there we will start to look at how we can influence the outputs.”
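Purely as illustration, a "key differences between players" classifier could look something like the sketch below, which labels recordings by player from summary timbre features. The MFCC features, the scikit-learn model, and every file name are assumptions, not the researchers' method.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def timbre_features(path):
    """Summarize a recording's timbre as the mean and spread of its MFCCs."""
    y, sr = librosa.load(path)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical recordings labelled by player; the file names are placeholders.
tracks = [
    ("player_a_take1.wav", "A"), ("player_a_take2.wav", "A"),
    ("player_b_take1.wav", "B"), ("player_b_take2.wav", "B"),
]

X = np.array([timbre_features(path) for path, _ in tracks])
labels = [label for _, label in tracks]

# Train a small classifier, then guess which player recorded a new take.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict([timbre_features("unknown_flute_take.wav")]))
```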
The overall goal is enormously ambitious, but, hey, wouldn’t we have said the same thing about self-driving cars or computers that can beat humans at Go just a few years back? With AI increasingly capable of learning to impersonate voices based on training data, this may be closer than we think.