
First steps for noflo-webaudio

05 Aug 2014 by Vilson Vieira

Before saying anything else, let's make some noise. You can play with the demo yourself by clicking here.

This simple demo was presented by Forrest in his last Assembly talk. It illustrates how we are combining audio and visuals in NoFlo.

It uses noflo-webaudio, our wrapper library for the Web Audio API. The Web Audio API defines a "signal-flow graph" in which audio sources connect to processors and can be manipulated on a sample-accurate basis. How do we map such a signal-flow graph to NoFlo? Having worked on noflo-canvas, we wanted to explore whether the same design and semantics could also work for Web Audio.
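For readers who haven't used the API, here is a minimal sketch of such a graph in plain Web Audio code, independent of NoFlo: two oscillators mixed through one gain node.

// Plain Web Audio API, no NoFlo involved.
var context = new AudioContext();

var gain = context.createGain();
gain.gain.value = 0.8;
gain.connect(context.destination);

[440, 660].forEach(function (frequency) {
  var osc = context.createOscillator();
  osc.frequency.value = frequency; // an AudioParam, adjustable with sample accuracy
  osc.connect(gain);
  osc.start();
});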

In noflo-canvas, each component generates lispy commands that are lazily evaluated by a complex component (Draw). We are doing the same for noflo-webaudio. This way, components like Oscillator and Gain send lispy commands like the following:

{"type": "gain", 
 "input": [{"type": "oscillator", "frequency": 440.0},
           {"type": "oscillator", "frequency": 660.0}]
 0.8}

which, again, maps to a lispy representation like

(gain ((oscillator 440)
       (oscillator 660))
      0.8)

Each time a component input (like an Oscillator's frequency) changes, the component sends an updated command on its output. The Play component receives all those commands and takes care of plugging them together and updating parameters when needed.

The major difference is that noflo-canvas follows a "redraw the entire canvas every time" paradigm, while noflo-webaudio can't reconnect all the audio nodes on every change: most of the time the audio graph doesn't change, just its parameters. So Play has to be smart enough to walk through the received commands and decide which nodes should be reconnected (like Oscillator and AudioFile) and which should just have their parameters updated, as in the sketch below.
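As a rough illustration (a hypothetical sketch, not the actual Play implementation; build, update, and patch are illustrative names), the walk could look like this:

// Hypothetical sketch, not the real noflo-webaudio code.
function build(context, command) {
  // Shape changed: construct fresh AudioNodes for this branch.
  if (command.type === 'oscillator') {
    var osc = context.createOscillator();
    osc.frequency.value = command.frequency;
    osc.start();
    return osc;
  }
  if (command.type === 'gain') {
    var gainNode = context.createGain();
    gainNode.gain.value = command.gain;
    command.input.forEach(function (child) {
      build(context, child).connect(gainNode);
    });
    return gainNode;
  }
}

function update(node, command) {
  // Shape unchanged: only touch AudioParams, never reconnect.
  if (command.type === 'oscillator') node.frequency.value = command.frequency;
  if (command.type === 'gain') node.gain.value = command.gain;
}

function patch(context, prevCommand, node, nextCommand) {
  if (!prevCommand || prevCommand.type !== nextCommand.type) {
    return build(context, nextCommand); // reconnect this branch
  }
  update(node, nextCommand);            // just retune it
  return node;
}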

The JSON representation of such signal-flow graphs resembles a declarative paradigm. We are exploring a Web Audio library called Flocking, which makes it possible to define signal-flow graphs declaratively, in a way that maps directly onto how we are currently dealing with noflo-canvas and noflo-webaudio. So we should have a usable noflo-flocking soon.
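For instance, Flocking's introductory "hello world" defines a unit-generator graph as a plain declarative object:

var synth = flock.synth({
  synthDef: {
    ugen: "flock.ugen.sinOsc", // a sine oscillator unit generator
    freq: 440,
    mul: 0.25                  // amplitude scaling
  }
});

The resemblance to the gain/oscillator commands above is what makes the mapping attractive.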

The Web Audio API is already supported on Android and iOS devices, so we can expect mobile musical instruments in NoFlo in the near future!

During this time we also started noflo-three, a components library for Three.js. We hope the same design we used for noflo-webaudio can be applied to a scene graph like the one Three.js uses. We are also making some nice generative stuff with noflo-canvas that we would love to share soon.
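If that works out, a scene command could mirror the audio commands above. Purely hypothetical, noflo-three's actual format may differ:

{"type": "scene",
 "children": [{"type": "mesh",
               "geometry": {"type": "box", "width": 1, "height": 1, "depth": 1},
               "position": [0, 0, 0]}]}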