The TSL Synthesis Synthesizer

This page explains the TSL Synthesis Synthesizer, an online synthesizer that can dynamically change signal flow topologies. The application uses program synthesis to transform user specifications into JavaScript code, which then controls the synthesizer. Users can then perform on their instrument, with the specified conditions triggering the synthesizer to automatically change the underlying signal flow topology. This work presents a novel interface for no-code reconfigurable signal flow topologies in software.

Recently, there has been increased interest in instruments that allow for reconfigurable signal flow topologies. The most prominent example is modular synthesis. Over the last ten years, modular synthesizers have experienced a revival; the number of Eurorack manufacturers nearly quadrupled from 2010 to 2020. In modular synthesis, the signal flow topology of the instrument is changed by rerouting patch cables. This rerouting is often a critical part of a performance, giving performers access to a rich space of possible sounds.

While modular synthesizers allow users to create complex signal flow topologies, those topologies are fixed until patch cables are manually rerouted. We propose a software system that allows users to create systems that automatically reconfigure the signal flow topology of a software synthesizer. Such automatic reconfiguration is already possible, but it requires users to write custom programs that reconfigure the topology themselves, and the feedback loop for writing such software is slow and unsuitable for real-time use. To overcome this challenge, we propose using program synthesis.

Here, we present the TSL Synthesis Synthesizer, a web synthesizer instrument that gives users the ability to create changing signal flow topologies in real time without writing code. The application takes a user specification and uses program synthesis to generate code that controls the synthesizer. The synthesized control code takes inputs, such as MIDI signals, and automates changes as specified by the user. Using our tool, users can preconfigure multiple signal flow topologies and step through them over the course of a performance without having to rewire a synthesizer. Users can generate control code by writing specifications manually or by using a set of dropdowns. After the user inputs their specification, our tool uses Temporal Stream Logic (TSL) to automatically synthesize JavaScript code and embeds the code into the website. All subsequent MIDI signals are processed through the synthesized code and change the signal flow as specified by the user.

As an example of the need for our tool, we present an annotated score of the popular jazz standard Autumn Leaves with a dynamically changing synthesis process.

Figure 1: Autumn Leaves rendition lead sheet.

Realizing this rendition in a DAW would be easy using automation on the synthesizer track. However, we want to be able to perform it live. The TSL Synthesis Synthesizer allows us to do this by creating a specification stating that note E4 will toggle the LFO and note G4 will toggle FM synthesis. After synthesizing a program from this specification, we can play the piece exactly as written in the score and rely on the synthesized TSL control logic to manage the effects.

A video demonstration of this rendition may be found at the following link:

https://www.youtube.com/watch?v=Rs2Q8bbbixs

The idea of reconfigurable signal flow for audio synthesis has been present since the earliest hardware synthesizers. For example, the RCA Mark II (1957) allowed users to connect analog circuitry with patch cables. Similarly, modular synthesis systems, such as those introduced by Buchla and Moog, also use patch cables to reconfigure signal flow. Today, tools such as VCV Rack replicate this patch reconfiguring process in software, and a similar interaction modality is used in MaxMSP and PureData. While there are tools to automate switching between saved preset values or saved patch configurations, there is no way to automate changes to the topology (how the wires are connected) of a modular synthesizer without writing custom code. While our work does not yet target the modular synthesis setting, the underlying algorithm is best understood from the perspective of automating control for rerouting patch cables.

Using program synthesis for real-time music performance has also been explored in the context of live coding for step sequencers. Live coding a step sequencer is algorithmically a simpler synthesis problem: the code must only generate a static value that represents the state of the sequencer. In contrast, our work must synthesize a reactive system that changes signal flow topologies in response to user input.

Overview

Our tool, the TSL Synthesis Synthesizer, is open-source. Users may play the keyboard, as well as synthesize a program that automates signal flow reconfigurations using the specification interface at the top.

Figure 2: Diagram of TSL Synthesis Synthesizer configuration. Blue sections represent playable interfaces.

Frontend

Fundamentally, the webpage is a keyboard synthesizer. The keyboard can be played in one of three ways: users can click the keys on the webpage with the mouse, use their computer keyboard, or connect a USB MIDI keyboard.
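As a rough sketch of the USB MIDI path (not the application's actual source code), MIDI input in the browser is typically received through the Web MIDI API along the following lines; handleNoteOn is a hypothetical handler name:

```javascript
// Sketch: receiving note-on events from a USB MIDI keyboard via the Web MIDI API.
// handleNoteOn is a placeholder; the application's actual handler may differ.
const handleNoteOn = (note) => {
  console.log("note on:", note); // e.g. 64 = E4, 67 = G4
};

navigator.requestMIDIAccess().then((midiAccess) => {
  for (const input of midiAccess.inputs.values()) {
    input.onmidimessage = (message) => {
      const [status, note, velocity] = message.data;
      const isNoteOn = (status & 0xf0) === 0x90 && velocity > 0;
      if (isNoteOn) handleNoteOn(note);
    };
  }
});
```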

The synthesizer includes the following sound features:

| Feature | Mouse Controllables | TSL Controllables |
| --- | --- | --- |
| Waveform | Sine, sawtooth, square, triangle | Sine, sawtooth, square, triangle |
| LFO (note vibrato) | Device: LFO on/off; Depth: 1 to 100; Frequency: 0 Hz to 20 Hz | Device: toggle on/off; Depth: increase/decrease by a user-specified amount; Frequency: increase/decrease by a user-specified amount |
| AM Synthesis | Device: AM on/off; Frequency: 0 Hz to 1000 Hz | Device: toggle on/off; Depth: increase/decrease by a user-specified amount; Frequency: increase/decrease by a user-specified amount |
| FM Synthesis | Device: FM on/off; Frequency: 0 Hz to 1000 Hz | Device: toggle on/off; Frequency: increase/decrease by a user-specified amount |
| Filter | Device: Filter on/off; Type: low-pass, high-pass, band-pass; Q: -4.0 to 3.0; Cutoff: 20 Hz to 10000 Hz | Device: toggle on/off; Type: low-pass, high-pass, band-pass; Cutoff: increase/decrease by a user-specified amount; Q: increase/decrease by a user-specified amount |
| Harmonizer | Device: Harmonizer on/off; Interval: -12 to 12 semitones | Device: toggle on/off; Interval: increase/decrease by a user-specified amount |
| Arpeggiator | Device: Arpeggiator on/off; Style: up, up-down, down, random; Rate: 1 Hz to 50 Hz | Device: toggle on/off; Style: up, up-down, down, random; Rate: increase/decrease by a user-specified amount |
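To give a concrete sense of what one of these features amounts to in WebAudio terms, the following sketch builds a sine voice with LFO vibrato by routing a low-frequency oscillator into the main oscillator's frequency parameter. It is an illustrative fragment rather than the application's source; the node graph the application actually uses may differ, and the depth and rate values are arbitrary examples within the ranges above.

```javascript
// Sketch: a sine voice with LFO vibrato in the Web Audio API.
// Connecting/disconnecting the lfo -> lfoGain -> osc.frequency path is one
// plausible way an "LFO on/off" toggle could be realized.
const ctx = new AudioContext();

const osc = ctx.createOscillator();   // main voice
osc.type = "sine";
osc.frequency.value = 440;            // A4

const lfo = ctx.createOscillator();   // low-frequency oscillator
lfo.frequency.value = 6;              // example vibrato rate (UI range: 0-20 Hz)
const lfoGain = ctx.createGain();
lfoGain.gain.value = 10;              // example vibrato depth in Hz

lfo.connect(lfoGain).connect(osc.frequency);  // modulate the voice's pitch
osc.connect(ctx.destination);

osc.start();
lfo.start();
```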

Through the UI, users can define their specification either through dropdown menus or by writing a TSL formula directly. If the user selects the dropdown options, the program parses the input and produces the corresponding TSL formula.
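As a hypothetical sketch of this parsing step (the dropdown fields and the textual TSL syntax shown here are illustrative, not the tool's exact format), a selection such as "when note60 is played, toggle the LFO" could be turned into a formula string as follows:

```javascript
// Hypothetical sketch of the dropdown-to-TSL step; field names and the
// textual TSL rendering below are assumptions for illustration only.
function dropdownToTSL(selection) {
  const { trigger, target } = selection; // e.g. { trigger: "note60", target: "lfo" }
  // "G" = always, "X" = next, "[x <- toggle x]" = TSL update term
  return `G (play ${trigger} <-> X [${target} <- toggle ${target}])`;
}

console.log(dropdownToTSL({ trigger: "note60", target: "lfo" }));
// -> "G (play note60 <-> X [lfo <- toggle lfo])"
```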

Once a specification is defined, users may click the “Synthesize!” button to synthesize the temporal control flow that alters the sound settings while the keyboard is played.

Backend

From a technical perspective, the key contribution of this work is the use of program synthesis to automatically generate code based on users’ specifications.

Temporal Stream Logic is a logic for describing reactive systems: systems that infinitely consume input and produce output over time. In our case, the reactive system consumes user input (i.e. MIDI input) and produces output in the form of signal flow rerouting (e.g. toggling an LFO). TSL formulae operate on an abstract notion of “time,” which moves forward with each reactive input; in our example, each new MIDI signal moves time a step forward. TSL includes temporal operators such as “next” ($\bigcirc$), “always” ($\square$), “eventually” ($\diamond$), “until” ($\mathcal{U}$), “weak until” ($\mathcal{W}$), “release” ($\mathcal{R}$), and “as soon as” ($\mathcal{A}$). It also includes Boolean operators such as “and” ($\wedge$), “or” ($\vee$), “implies” ($\to$), and “if and only if” ($\leftrightarrow$).

Using TSL, we can define the desired signal flow for the motivating example:

$$(\square~\texttt{play}~\texttt{note67} \leftrightarrow (\bigcirc [\texttt{am} \leftarrow \texttt{toggle}~\texttt{am}])) \land (\square~\texttt{play}~\texttt{note64} \leftrightarrow (\bigcirc [\texttt{lfo} \leftarrow \texttt{toggle}~\texttt{lfo}]))$$

We can also create more complex control formulae, such as the following, which (roughly) states that once note60 is played, AM synthesis toggles on every subsequent input until note67 is played:

$$\square~\texttt{play}~\texttt{note60} \to \bigcirc (\square [\texttt{am} \leftarrow \texttt{toggle}~\texttt{am}])~\mathcal{W}~\texttt{play}~\texttt{note67}$$

After obtaining the TSL formula, the backend checks whether the specification is realizable (i.e. whether there exists a program that satisfies it). If it is unrealizable, the user is asked to change their specification; if it is realizable, the backend synthesizes JavaScript code that works with the WebAudio and WebMIDI code present in the web application. All subsequent MIDI signals and keyboard presses then follow the specification, dynamically altering the signal flow topology when the user’s conditions are met.
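Although we do not reproduce the generated code here, the synthesized control code can be thought of as a small reactive state machine that applies the specified updates on each MIDI input. A minimal sketch of that behavior for the motivating specification, with hypothetical helper names, is shown below:

```javascript
// Sketch of the *behavior* of the synthesized control code for the motivating
// specification; the actual generated JavaScript and the toggleAM/toggleLFO
// helpers are hypothetical stand-ins for the WebAudio rerouting code.
let amOn = false;
let lfoOn = false;
const toggleAM = () => { amOn = !amOn; /* reroute the AM modulator in the audio graph */ };
const toggleLFO = () => { lfoOn = !lfoOn; /* reroute the LFO in the audio graph */ };

// Called once per MIDI input; each call is one step of TSL "time".
function onMidiInput(note) {
  if (note === 67) toggleAM();   // playing G4 toggles AM synthesis
  if (note === 64) toggleLFO();  // playing E4 toggles the LFO
  // All other inputs leave the signal flow topology unchanged.
}
```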

Future Work

Our prototype currently provides two interfaces by which a user can provide a specification. Our goal is to use this first iteration of our instrument to further explore the space of possible interfaces for generating specifications for program synthesis. The current dropdown selection model is limited in its ability to capture complex temporal relationships between predicates (e.g. play note60) and actions (e.g. increase AM modulation frequency). We also hope to expand the set of predicates beyond MIDI input signals to capture arbitrary properties of the synthesizer’s internal signals.

One of the key challenges we have identified through our pilot testing is communicating the mode of thinking a user must adopt when working with a program synthesis framework. Program synthesis asks users to give properties of the desired behavior, not to define the complete desired behavior. We need to find interface designs that make clear that specifying a program is different from writing a program.

Additionally, while our current implementation in WebAudio is accessible, it has limited application in professional production. Once we find an effective interface design, we plan to migrate this tool to a framework better suited to professional audio, such as a VST3 plugin. Another potential future direction is to build the program synthesis functionality into a digital hardware synthesizer.