We’ve heard techno and electronica. We’ve heard dubstep. Heck, we’ve even heard synthesized music written by AIs in the style of the Beatles. And not too long ago, artificial intelligence began generating sounds that humans have never experienced.
But the Georgia Tech Center for Music Technology has taken things a step further with Shimon.
Shimon is a robot that uses artificial intelligence not only to compose its own music but also to play it, on a real-life marimba.
This musically inclined robot combines deep learning to generate its songs, machine vision to see its instrument, and robotics to strike the right notes.
Because Shimon relies on machine vision, its head houses a camera, and its movements are strikingly reminiscent of a human performer's. It almost looks as if it's jamming to the beat of its own creation.
Here is Shimon’s first jazzy release:
To train its neural network to compose its own music, researchers at Georgia Tech fed Shimon "more than 5,000 complete songs, two million motifs, riffs and short passages of music."
What Shimon learned from all this music is what differentiates music from noise, at least in a statistical way. Shimon was able to analyze those songs to understand which sorts of notes belong together in musical pieces, knowledge it put to work in its own compositions.
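To make the idea of "statistically learning which notes belong together" concrete, here is a deliberately simplified illustration, not Shimon's actual system: a Markov-chain model that counts which notes follow which in a corpus of melodies, then samples a new melody from those counts. All names and the toy corpus here are invented for the example.

```python
from collections import defaultdict
import random

def train_transitions(melodies):
    """Count, for each note, how often each other note follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return counts

def compose(counts, start, length, rng=None):
    """Sample a melody by repeatedly picking a likely next note."""
    rng = rng or random.Random(0)
    note, out = start, [start]
    for _ in range(length - 1):
        followers = counts.get(note)
        if not followers:
            break  # dead end: no note ever followed this one in training
        choices, weights = zip(*followers.items())
        note = rng.choices(choices, weights=weights)[0]
        out.append(note)
    return out

# Toy "training set" of two tiny melodies.
corpus = [["C", "E", "G", "E", "C"], ["C", "E", "G", "B", "G"]]
model = train_transitions(corpus)
print(compose(model, "C", 6))
```

A real system learns far richer statistics over rhythm, harmony, and longer contexts, but the principle is the same: frequent patterns in the training music become probable patterns in the output.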
The robotics aspect of Shimon is actually not new: its four robotic arms have long been playing marimba alongside human musicians, who, until now, provided it with the songs.
It is the addition of artificial intelligence that has pushed Shimon into the spotlight. Machine vision, deep learning, and robotics have together produced an intelligent robotic musician that can both write and play its own songs, ones with a distinctly otherworldly feel.
As The Verge explains, this is because Shimon's deep neural network analyzes music in short bursts rather than perceiving its overall structure. Bretan, one of Shimon's creators, explains that Shimon learns through neural embedding: assigning numbers to small snippets of music in order to determine which notes and beats "fit" closely together.
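The embedding idea described above can be sketched in a few lines. This is a hedged, hypothetical illustration rather than Shimon's actual network: each snippet is mapped to a vector of numbers (here via a stand-in hash-based embedding instead of a trained model), and snippets whose vectors lie close together, by cosine similarity, are judged to "fit."

```python
import numpy as np

def embed(snippet, dim=8):
    """Stand-in embedding: derive a fixed vector per note, then average.
    A real system would learn these vectors with a neural network."""
    vecs = [
        np.random.default_rng(abs(hash(note)) % (2**32)).standard_normal(dim)
        for note in snippet
    ]
    return np.mean(vecs, axis=0)

def similarity(a, b):
    """Cosine similarity: identical embeddings score 1.0."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

riff_a = embed(["C4", "E4", "G4"])
riff_b = embed(["C4", "E4", "G4"])  # same snippet, so same embedding
riff_c = embed(["F#2", "A2", "D3"])

print(similarity(riff_a, riff_b))  # identical snippets score ~1.0
print(similarity(riff_a, riff_c))  # unrelated snippets score lower
```

With learned rather than hash-based embeddings, snippets that merely sound similar, not just identical ones, would land near each other, which is what lets the system judge which notes and beats belong together.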
Here’s Shimon’s second composition:
The works Shimon trained on span everything from classical to pop: Beethoven to the Beatles to Lady Gaga. The resulting sound has elements of all of these, overlaid in a way that makes mathematical sense to an AI. The result is stunning, a little experimental and off-kilter in a fabulously fascinating way; it lilts where we wouldn't expect. In other words, it doesn't sound like the music most humans would create, and has drawn descriptions like "avant-garde" and "jazz fusion."
Perhaps the next big hits will be composed and performed by our AI friends from the future.