Real-time sound synthesis with Jupyter

This notebook demonstrates using the Web Audio API and WebSockets to transfer audio generated with Python/NumPy to the Jupyter notebook's web frontend. The notebook can be downloaded from my GitHub repository.

#Install support for websockets
pip install autobahn
pip install ipywidgets
pip install plotly
Requirement already satisfied: autobahn in /usr/local/lib/python3.5/dist-packages
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.5/dist-packages (from autobahn)
import asyncio
from autobahn.asyncio.websocket import WebSocketServerProtocol, WebSocketServerFactory
#import websockets
import numpy
import threading
import time
import plotly
from plotly.graph_objs import Scatter, Layout

from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets

server = None
loop_thread = None

Creating the signal

The signal to play back is generated by the Python backend using NumPy. It is a simple signal consisting of 5 sine waves.
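As a standalone sketch (independent of the notebook state built up below), a single sampled sine component is just sin(2πft) evaluated at the sample times; the actual signal sums five of these:

```python
import numpy

sample_rate = 44100.0
buffer_size = 4096
frequency = 200.0  # Hz, the lowest of the five components

# the time of each sample in seconds: 0, 1/fs, 2/fs, ...
t = numpy.arange(buffer_size) / sample_rate
# one sampled sine component: sin(2*pi*f*t), stored as float32
sine = numpy.sin(2.0 * numpy.pi * frequency * t).astype(numpy.float32)

print(sine.shape)  # (4096,)
print(sine[0])     # 0.0
```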

buffer_size = 4096
sample_rate = 44100.0

#note: float32 is essential, the same format is used by Javascript
#if client and server differ in endianness, additional
#processing would be needed
buffer = numpy.zeros(buffer_size,dtype=numpy.float32)

#create an array of 5 frequencies
component_count = 5
sine_frequencies = numpy.linspace(start = 200, stop = 400, num = component_count)
#reshape from (5,) to (1,5)
sine_frequencies = numpy.reshape( sine_frequencies, (1, component_count))

#create a (1,5) array with the angular frequency in radians per second
sine_frequencies_angle_per_s = ( sine_frequencies * 2.0 * numpy.pi ) 

#store the start phases for each buffer fill
#by using a start phase you won't get glitches in the signal
#when changing the frequency
sine_start_phases = numpy.zeros(sine_frequencies.shape)

#create buffers for each sine component -> (buffer_size,components)
sine_components = numpy.zeros((buffer_size, component_count))

#calculate the time of each sample in seconds (buffer_size,1)
#endpoint=False gives an exact spacing of 1/sample_rate between samples
sample_times = numpy.reshape(numpy.linspace(start = 0, stop = buffer_size / sample_rate, num = buffer_size, endpoint = False),(buffer_size,1))
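The (buffer_size, 1) and (1, components) shapes are chosen so that NumPy broadcasting expands their product to (buffer_size, components). A minimal illustration with small shapes:

```python
import numpy

# a (4,1) column of sample times and a (1,3) row of angular frequencies
times = numpy.arange(4).reshape(4, 1)
omegas = numpy.array([[1.0, 2.0, 3.0]])

# broadcasting expands the product to (4,3):
# every sample time is paired with every frequency
phases = times * omegas
print(phases.shape)  # (4, 3)
print(phases[2])     # [2. 4. 6.]
```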

def setSineFrequency(component, frequency):
    """This function is called when a slider is moved"""
    global sine_frequencies, sine_frequencies_angle_per_s
    sine_frequencies[0,component] = frequency 
    sine_frequencies_angle_per_s = ( sine_frequencies * 2.0 * numpy.pi )

# Create a slider for each frequency component
# (the slider range below is an assumption; adjust to taste)
for component in range(sine_frequencies.shape[1]):
    interact(setSineFrequency,
             component = fixed(component),
             frequency = widgets.FloatSlider(
                 min = 50.0, max = 1000.0, step = 1.0,
                 value = sine_frequencies[0, component]))

def fillBuffer():
    """Fill a single buffer with the sum of the sine components"""
    global sine_start_phases, sine_components    
    sine_phases = sine_start_phases + sample_times * sine_frequencies_angle_per_s
    sine_components = numpy.sin(sine_phases) * 1 / component_count
    numpy.sum(sine_components, axis = 1, out = buffer)
    #advance the start phases by exactly one buffer length
    sine_start_phases += buffer_size * sine_frequencies_angle_per_s / sample_rate
# To test: fill two consecutive buffers and glue them together
glued = numpy.zeros(2 * buffer_size)
fillBuffer()
glued[0:buffer_size] = buffer
fillBuffer()
glued[buffer_size:2 * buffer_size] = buffer
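The claim that carrying a start phase avoids glitches can be checked numerically. The sketch below uses a hypothetical `fill()` function, a single-sine simplification of fillBuffer, and verifies that the jump across a buffer boundary stays as small as the sample-to-sample steps inside a buffer:

```python
import numpy

sample_rate = 44100.0
buffer_size = 4096
omega = 2.0 * numpy.pi * 300.0          # one 300 Hz component
t = numpy.arange(buffer_size) / sample_rate
phase = 0.0

def fill():
    # fill one buffer of a single sine, carrying the phase across calls
    global phase
    out = numpy.sin(phase + omega * t).astype(numpy.float32)
    phase += omega * buffer_size / sample_rate
    return out

a, b = fill(), fill()
# the step across the boundary stays on the order of omega/sample_rate
boundary_step = abs(float(b[0]) - float(a[-1]))
print(boundary_step < 0.05)  # True (omega/sample_rate is about 0.043)
```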

In the actual notebook, sliders are shown that let you change the frequencies of the components.

x = plotly.offline.iplot({
    "data": [Scatter(y=glued)],
    "layout": Layout(title="Two buffers")
})

Python backend for websockets

Python can support WebSockets via the autobahn package. Autobahn needs an event-loop system; this example uses asyncio.
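Running an event loop inside a Jupyter kernel without blocking it means running the loop on a background thread. A minimal standalone sketch of that pattern (the WebIOThread class below does the same with some extra bookkeeping):

```python
import asyncio
import threading

# create a loop and run it forever on a background thread
loop = asyncio.new_event_loop()
result = []

def worker():
    asyncio.set_event_loop(loop)
    loop.run_forever()   # blocks this thread until loop.stop() runs

thread = threading.Thread(target=worker)
thread.start()

# call_soon_threadsafe is the safe way to talk to the loop
# from another thread
loop.call_soon_threadsafe(result.append, "ran")
loop.call_soon_threadsafe(loop.stop)
thread.join()
print(result)  # ['ran']
```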

#Create a thread to run the asyncio loop
#It calls loop.run_forever

loop = asyncio.get_event_loop()

class WebIOThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.loop = asyncio.get_event_loop()
        self.should_quit = False
    def quit(self):
        self.should_quit = True
        if self.loop.is_running():
            #stop the loop from another thread
            self.loop.call_soon_threadsafe(self.loop.stop)
    def run(self):
        self.running = True
        print("thread start")    
        asyncio.set_event_loop(self.loop)
        while (not self.should_quit):
            print("start loop")
            self.loop.run_forever()
            print("loop exited")
            if (not self.should_quit):
                #the loop stopped for another reason, restart it
                time.sleep(0.1)
        print("thread exit")    
        self.running = False
#Define a simple websocket protocol that handles requests
#for sample buffers
class SignalGeneratorProtocol(WebSocketServerProtocol):
    def onConnect(self, request):
        print("Client connecting: {}".format(request.peer))

    def onOpen(self):
        print("WebSocket connection open.")

    def onMessage(self, payload, isBinary):
        # if the client sends (any) message then
        # assume it wants a new buffer of samples
        fillBuffer()
        self.sendMessage(buffer.tobytes(), True)
    def onClose(self, wasClean, code, reason):
        print("WebSocket connection closed: {}".format(reason))
factory = WebSocketServerFactory()
factory.protocol = SignalGeneratorProtocol
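What actually crosses the socket in onMessage is the raw float32 sample memory: buffer.tobytes() serializes it and the browser reinterprets the same bytes with Float32Array. A small round-trip sketch of that framing:

```python
import numpy

# a tiny buffer of float32 samples, as the backend produces them
samples = numpy.linspace(-1.0, 1.0, num=8).astype(numpy.float32)

# tobytes() gives the raw sample memory: 4 bytes per float32 sample
payload = samples.tobytes()
print(len(payload))  # 32

# frombuffer is the Python counterpart of new Float32Array(
restored = numpy.frombuffer(payload, dtype=numpy.float32)
print(numpy.array_equal(samples, restored))  # True
```

As the comment in the signal-generation cell notes, this only works unchanged when client and server share the same (little-endian) byte order.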
#Every time this cell is executed, clean up
#previous servers

if server is not None:
    print("Close existing server")
    server.close()
    server = None

if loop_thread is not None:
    print("Close existing loop thread")
    loop_thread.quit()
    loop_thread.join()
    loop_thread = None

loop = asyncio.get_event_loop()
coro = loop.create_server(factory, '', 8889)
server = loop.run_until_complete(coro)

loop_thread = WebIOThread()
loop_thread.start()
Close existing server
Close existing loop thread
loop exited
thread exit
thread start
start loop

Javascript audio client

The following Javascript initializes Web Audio and defines a ScriptProcessor which is called periodically to fill the audio output buffer. If there are fewer than 10 audio buffers queued, it requests more by sending a request over the websocket.

Execute this cell to start playback.


if (!window.audioContext) {
    window.audioContext = new AudioContext();
}

// Connect a web socket to a localhost server on port 8889
var ws = new WebSocket("ws://localhost:8889");
ws.binaryType = 'arraybuffer';

// bufferSize should be the same on client and server
var bufferSize = 4096;
var player = (function() {
    var node = window.audioContext.createScriptProcessor(bufferSize, 1, 1);
    node.onaudioprocess = function(e) {
        var output = e.outputBuffer.getChannelData(0);
        // did the websocket receive any buffers with samples?
        if (ws.received_buffers.length >= 1) {
            // play back the first received buffer
            output.set(ws.received_buffers.shift());
        }
        if (ws.received_buffers.length < 10) {
            // The number of sample buffers in the queue is getting low,
            // request more buffers
            if (ws.readyState == 1) {
                // Web Socket is connected, send data using send()
                ws.send("Get block");
                ws.send("Get block");
            }
        }
    };
    return node;
})();

// called when the websocket is opened
ws.onopen = function() {
  console.log("Websocket connected");
  ws.received_buffers = [];
  ws.received_frames = 0;
  // request a buffer of samples
  ws.send("Get block");
  ws.is_connected = false;
};

ws.onmessage = function(evt) {
  // a new buffer of samples was received
  ws.received_buffers.push(new Float32Array(;
  ws.received_frames += 1;
  if (!ws.is_connected) {
      ws.is_connected = true;
      // start playback once the first buffer has arrived
      player.connect(window.audioContext.destination);
  }
};

ws.onclose = function() {
  ws.is_connected = false;
  // websocket is closed
  console.log("Connection is closed...");
};

Client connecting: tcp:
WebSocket connection open.
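The 10-buffer threshold can be sanity-checked with a toy model of the callback loop (assuming, unrealistically, that every "Get block" request is answered before the next audio callback; in reality replies arrive asynchronously):

```python
from collections import deque

queue = deque()        # client-side queue of received buffers
requests_sent = 0

for callback in range(100):              # simulate 100 onaudioprocess calls
    if queue:
        queue.popleft()                  # play back one buffer
    if len(queue) < 10:
        requests_sent += 2               # send "Get block" twice...
        queue.extend(["block", "block"]) # ...and assume instant replies

# the queue settles around the threshold instead of growing without bound
print(len(queue))
```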

WebSocket connection closed: None


The notebook shows that it is possible to synthesize audio from within Python and add interactivity using ipywidgets. A next step would be to record audio using the Web Audio API, send it to the backend, process it, and send it back to the frontend for playback.




