Using computers and synthesisers to make music isn’t new. But giving computers the ability to create new sounds largely on their own – that is, with machine learning – is, and Google’s Brain team has just got there.

A research group on the Brain team called Magenta, which explores how machine learning can be used in the arts, has been working on a computer program named NSynth.

NSynth, which stands for “neural synthesiser”, is essentially a complex algorithm that creates new sounds. That might sound basic, but as the video above explains, NSynth doesn’t just mix or blend sounds – it learns the acoustic properties of sounds, combines the characteristics that give each sound its distinctive character, and then generates a completely new sound.
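The core idea – blending learned representations of sounds rather than the raw audio itself – can be illustrated with a toy sketch. Everything below is a hypothetical stand-in: the `encode` function here reduces a waveform to three hand-picked features, whereas NSynth’s real WaveNet-style autoencoder learns far richer temporal embeddings and decodes them back into audio.

```python
import numpy as np

def encode(audio):
    # Hypothetical stand-in for a learned encoder: summarise a waveform
    # as a small fixed-length vector of acoustic characteristics.
    return np.array([audio.mean(), audio.std(), np.abs(np.diff(audio)).mean()])

def interpolate(z_a, z_b, alpha=0.5):
    # Blend the *embeddings*, not the raw waveforms -- this is the key
    # difference from simply mixing two recordings together.
    return (1 - alpha) * z_a + alpha * z_b

# Two toy "instrument" signals: a sine-wave flute and a square-wave bass.
t = np.linspace(0, 1, 16000)
flute = np.sin(2 * np.pi * 440 * t)
bass = np.sign(np.sin(2 * np.pi * 110 * t))

# A point halfway between the two sounds in embedding space.
z_new = interpolate(encode(flute), encode(bass), alpha=0.5)
```

In the real system a decoder network would then turn `z_new` back into a waveform that carries qualities of both source instruments; mixing the raw audio instead would just produce two sounds playing at once.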

As Doug Eck, a research scientist on Magenta, clarifies in the video above, “We’re using neural networks to generate sound. Not the actual note that’s playing, but the sound of the instrument.”

Another group from Google, the Creative Lab, then took NSynth and created a musical instrument, a synthesiser of sorts, called NSynth Super. The video below explains how the instrument and the underlying algorithm work together to create new sounds. The instrument combines the properties of two basses, one electric piano, and a sitar to create an entirely new sound with qualities of each instrument.
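Conceptually, combining four sources amounts to a weighted blend in the model’s embedding space. The sketch below is an assumption-laden toy – random vectors stand in for real NSynth embeddings, and the `blend` helper is hypothetical, not the instrument’s actual code.

```python
import numpy as np

def blend(embeddings, weights):
    # Weighted average of embedding vectors; weights are normalised so
    # they always sum to 1. NSynth Super's touch pad works in a similar
    # spirit, weighting its four corner sounds by finger position.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * e for wi, e in zip(w, embeddings))

rng = np.random.default_rng(0)
# Stand-ins for the four source sounds named above:
# two basses, an electric piano, and a sitar.
sources = [rng.standard_normal(16) for _ in range(4)]

# Leaning towards the first corner weights that instrument more heavily.
z = blend(sources, [0.4, 0.2, 0.2, 0.2])
```

Decoding `z` (in the real system) would yield one sound carrying qualities of all four instruments, rather than four recordings layered on top of each other.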


What this means is that musicians can now create entirely new sounds using NSynth, as Hector Plimmer does in the video above.

Google hasn’t offered the NSynth Super for purchase (yet), but it has published comprehensive – and rather involved – instructions for building your own from scratch.

Google’s artificial intelligence team has also demonstrated a virtual pianist called AI Duet, which uses machine learning to play alongside human pianists.