About a decade ago, a friend and I were talking over dinner about our shared passion for electronic music. Our burning question was: how can we convince the world to love techno as much as we do?
A common—and not always unfair—criticism is that electronic music can be repetitive, lacking an overall arc, or overly simplistic, especially when compared to something like a classical concerto. However, your appreciation of concertos deepens greatly once you understand the three-movement structure, the repeating motifs, question and answer, and all the other tricks and conventions of classical music. We thought the same principle applied to electronic music.
In order to demonstrate the macro-structure and texture of the music, our idea was to pair the audio with illustrative visualisations.
VJs (who produce video and light shows to complement a DJ’s set) were already quite popular, but the interconnection between the DJ’s audio and VJ’s visuals was tenuous at best… Perhaps a good VJ would match the mood and rhythm of their show to the music in real time, but they weren’t able to reify a deeper structure in the musical performance[0].
These visualisations would need to reveal the patterns already present in the music. They would need to have a cohesive style to them—to ensure internal consistency bar-to-bar—but they would also need to be modulated by the music in an intuitive way: perhaps spikier shapes for distorted sounds, different colours for different tones and instruments, larger or smaller shapes for greater or lesser volumes, and so on.
We wanted a system which could dynamically generate this imagery in real time, even for live performances.
To start off with something relatively simple, I first focussed on transforming short, single-note audio clips into small hand-drawn “squiggles”.
I applied a machine learning technique known as CycleGANs to attempt this mapping from audio to illustrative figures.
Although it’s possible to build a conventional AI system[1] capable of this audio-to-video mapping, I chose to use CycleGANs instead.
Generative Adversarial Networks (GANs) are a pair of machine learning networks with opposite goals. One network is called the generator: its goal is to create realistic fakes—images, video, or audio, for example. In contrast, the discriminator network tries not to be fooled by the generator’s forgeries. The generator is rewarded when the discriminator judges its output to be authentic; the discriminator is rewarded when it correctly spots a forgery, and when it correctly identifies an authentic example.
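As a rough sketch (not this project’s code), the two opposing objectives can be written as a pair of loss functions; real_output and fake_output stand for the discriminator’s scores on authentic examples and on the generator’s forgeries:

# Illustrative sketch of the two opposing GAN objectives (not the project's code)
import tensorflow as tf
from tensorflow import keras

bce = keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_output, fake_output):
    # Rewarded for labelling authentic examples as real (1) and forgeries as fake (0)
    return (bce(tf.ones_like(real_output), real_output) +
            bce(tf.zeros_like(fake_output), fake_output))

def generator_loss(fake_output):
    # Rewarded when the discriminator labels the generator's forgeries as real
    return bce(tf.ones_like(fake_output), fake_output)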
There are many applications of GANs. The best known is probably Deepfake videos that can do things like turn Donald Trump into a sassy local news reporter. One of their key benefits is that they tend to require less human feedback to become effective, because the generator and discriminator networks effectively train each other. Another benefit is that they can exhibit creative behaviour: the selection pressure exerted by each network on the other can result in strategies that humans would be unlikely to find alone.
Building on the idea of GANs, CycleGANs arrange two GANs into a loop. Rather than having a single pair of networks with opposite goals, we now have two pairs of networks. The first pair is expert at creating fakes in one domain (for example, forged photographs of horses). The second pair is expert in another domain (for example, forged photographs of zebras).
[Figure: here, the horse GAN needs more training.]
By hooking up the inputs, outputs, and training processes of all of these networks, they can be trained as an overall composite system—in a similar way to how a single GAN combines two networks.
In the following diagrams, the notation is:
In addition to the GANs becoming more able to create and spot forgeries, CycleGANs train the networks to increase cyclic accuracy[2]. What this means is that not only should each GAN’s output be realistic, but when we do a full pass through the GANs’ interconnected inputs and outputs, the final result should be as close as possible to the original input:
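As a minimal sketch of that cyclic check (the generator names g_audio_to_image and g_image_to_audio are hypothetical, not this project’s code), the result of a full round trip is compared with the original using a mean absolute error:

# Illustrative sketch of the cycle-consistency term (generator names are hypothetical)
import tensorflow as tf

def cycle_consistency_loss(real_audio, real_image, g_audio_to_image, g_image_to_audio):
    # audio -> image -> audio should give back something close to the original audio
    reconstructed_audio = g_image_to_audio(g_audio_to_image(real_audio))
    # image -> audio -> image should give back something close to the original image
    reconstructed_image = g_audio_to_image(g_image_to_audio(real_image))
    # Mean absolute error penalises any drift introduced by the round trip
    return (tf.reduce_mean(tf.abs(real_audio - reconstructed_audio)) +
            tf.reduce_mean(tf.abs(real_image - reconstructed_image)))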
CycleGANs have been shown to be effective at a variety of tasks, including animal transformations (e.g. horses to zebras), season transformations (e.g. summer scenes to winter), and stylistic transfer (e.g. photograph to Monet).
I wondered if I could use them to achieve a much more ambitious transformation: from sound to image.
As far as I know, CycleGANs have only been used to map between dimensionally similar domains before: images to images, audio to audio, etc. However, for this application I needed to be able to convert two-channel, 1-dimensional data (audio) to three-channel, 2-dimensional data (images). For this first version, I took the quick and easy approach of simply reshaping the data, something like this:
import math
import keras

# Reshaping audio into an image
# This is the first step in our image generator model
audio_shape = (43264, 2)  # two-channel audio; 43264 samples per channel (208 * 208)
in_audio = keras.layers.Input(shape=audio_shape)
axis_size = int(math.sqrt(audio_shape[0]))  # 208
reshaped = keras.layers.Reshape((axis_size, axis_size, 2))(in_audio)

# Reshaping an image into audio
# This is the last step in our audio generator model
# (`image_shape` and `model` are defined earlier in the audio generator, not shown here)
number_of_audio_samples = image_shape[0] ** 2
out_audio = keras.layers.Reshape((number_of_audio_samples, 2))(model)
The rest of the networks were unchanged from the original CycleGAN paper[3], apart from a couple of dimensionality tweaks in the network architecture to accommodate mixing and matching audio and visual data.
Overall, the results are surprisingly effective for a quick first attempt!
The networks were trained on a p2.xlarge spot instance in AWS (terraform config adapted from Vithursan Thangarasa). Here’s how we did:
With lower frequencies and broader sounds, the visualisations seem more spread out.
Even with this naïve approach, some synaesthetic transfer is discernible:
My expectation is that the current weakest link is the naïve munging of the outputs to get data shaped like audio or an image. Instead of this simplistic approach, using a Fourier transform to generate 2D data from the audio seems like a better idea.
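For instance, a short-time Fourier transform would already turn a one-second clip into a 2D, image-like array; a sketch of the idea (not part of the current prototype, and the frame parameters below are arbitrary):

# Sketch: a short-time Fourier transform maps 1D audio to a 2D, image-like array
# (not part of the current prototype; frame parameters are arbitrary)
import tensorflow as tf

def audio_to_spectrogram(mono_audio):
    # mono_audio: a 1D tensor of samples for a short clip
    stft = tf.signal.stft(mono_audio, frame_length=512, frame_step=128)
    return tf.abs(stft)  # magnitude spectrogram: (time frames, frequency bins)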
For this particular output domain (the hand-drawn “squiggles” mentioned above), having the system output in SVG rather than raster format would be interesting to try. The visualisations would gain a natural cohesion, but it’s not immediately clear how to ensure the output is valid SVG… It would likely need to output an AST-like structure from which we could generate actual SVG.
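One purely hypothetical shape for that AST-like structure: the network emits a small tree of stroke primitives, and a separate deterministic step serialises them, so the result is valid SVG by construction. Something like:

# Purely hypothetical sketch of an AST-like intermediate form for SVG squiggles
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Stroke:
    points: List[Tuple[float, float]]  # the squiggle's path as (x, y) pairs
    width: float
    colour: str

def strokes_to_svg(strokes: List[Stroke], size: int = 256) -> str:
    # Deterministic serialisation: the output is well-formed SVG by construction
    paths = []
    for s in strokes:
        d = "M " + " L ".join(f"{x:.1f} {y:.1f}" for x, y in s.points)
        paths.append(f'<path d="{d}" stroke="{s.colour}" stroke-width="{s.width}" fill="none"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">'
            + "".join(paths) + "</svg>")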
As mentioned in a footnote[1], building this system with conventional supervised machine learning would take a lot of time. To retrain the system for different types of sounds or different visualisation styles would require a human to create a corpus of audio-image pairs. However, with this system we could feed Unicode alphabets, the works of Miró, photographs of sunsets, or any other image data into the CycleGAN without any human supervision: the networks would need to be retrained, but this takes just a few hours and costs a few dollars.
Even with some of the improvements mentioned above, how would this work for real music, when there are tens of voices and instruments all sounding in parallel, in a continuous fashion? The prototype outlined here is only capable of working with 1-second monophonic snippets.
A couple of thoughts here:
Google have Magenta Studio, which hooks directly into Ableton Live as a new breed of AI-powered synthesiser. Rather than appearing as a tool for the producer, though, our system would shuttle the output of a performance to the CycleGAN for the VJ to use.