jsaSound for Sound Modelers

jsaSndLib helps you do two things:

  1. Provide your sound users with a simple and consistent API so your sounds are easy to use and reuse in different applications, and
  2. Provide you with some tools to support the development of rich and interesting interactive sound models.


There are docs, but examples might be more helpful.

You will typically create your sound models by connecting audioNodes together just as you normally do when using the WebAudio API. Then comes the jsaSound library magic.
There is a "base sound model" function object that gives you:

  1. the interface methods that users call to interact with the model (play(), release(), stop(), setParam(), setParamNorm()),
  2. callbacks to onPlay(), onRelease(), onStop() for you to use for sound-specific actions,
  3. methods for loading resources, queueing events in the future, etc.,
  4. a wrapper that makes your entire sound model "quack like an audioNode", so that you can use it in audio graphs (using connect()) with other Web Audio audioNodes. The provider of this ultra-useful capability is Kumar Subramanian, who calls these things GraphNodes.
This is the object your sound model factory returns to provide access to the interface. Hence you call it thus:
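A minimal sketch of such a factory, with a stand-in stub for the base sound model (the names here, including `baseSM`, are assumptions for illustration, not the real jsaSound code):

```javascript
// Illustrative stub standing in for the real jsaSound base sound model:
function baseSM(i_mystery, i_inputNodes, i_outputNodes) {
    return {
        play: function () {}, release: function () {}, stop: function () {},
        inputs: i_inputNodes, outputs: i_outputNodes
    };
}

// A sound model factory wraps its audio graph and returns the interface object:
function makeMySoundModel(gainNode) {
    // second argument: nodes other audioNodes connect() to;
    // last argument: nodes this model uses to connect() to other audioNodes
    var myInterface = baseSM(undefined, [gainNode], [gainNode]);
    return myInterface;
}
```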

The first argument is a mystery argument. The second argument is an array of nodes that other audioNodes will use to connect to your model. The last argument is an array of nodes your model will use to connect to other audioNodes.

One of the key features of jsaSound is the standardized interface it presents to the world.
To expose a parameter for the model user:
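A sketch of the register-a-parameter pattern, with assumed names (`registerParam`, the range-object fields) and plain-JavaScript stand-ins for the library internals:

```javascript
// Illustrative sketch only: the registerParam signature is an assumption.
// A model registers a named, ranged parameter plus a callback that applies it.
var params = {};
function registerParam(name, range, setter) {
    params[name] = { min: range.min, max: range.max, val: range.val, setter: setter };
}
function setParam(name, val) {
    params[name].val = val;
    params[name].setter(val); // push the value into the audio graph
}
function setParamNorm(name, norm) {
    var p = params[name];
    setParam(name, p.min + norm * (p.max - p.min)); // map [0,1] onto [min,max]
}

// inside a model factory, expose a "Gain" parameter:
var currentGain = 0;
registerParam("Gain", { min: 0, max: 1, val: 0.5 }, function (v) { currentGain = v; });
```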

After that, model users can call the setParam and getParam family of methods in the jsaSound user interface.

play()
release()
stop()

setParam([name, number], val)
setParamNorm([name, number], val)

getParam([name, number], ["name", "type", "val", "normval", "min", "max"]) // type can return "range" or "url"

getNumParams()
getAboutText()


So here is an example using the basic jsaSound facilities described above. I create a Formant synthesizer model with parameters for setting each of four formants. Then I construct a Vowel sound model that uses the Formant synthesizer just like an audioNode:
First, I create an instance of the Formant model:
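For instance (the factory name `makeFormantSynth` and the stub library object are assumptions, sketched so the step stands alone):

```javascript
// Illustrative stub standing in for the real jsaSound Formant factory:
var jsaSound = {
    makeFormantSynth: function () {
        return {
            connect: function (node) { this.dest = node; },
            play: function () {}, setParam: function () {}
        };
    }
};

// create an instance of the Formant model:
var formantSynth = jsaSound.makeFormantSynth();
```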

then I connect it to my Vowel model gain node:
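Sketched with stubs (in real code these would be a jsaSound model instance and a WebAudio GainNode):

```javascript
// Stand-ins for a Formant model and the Vowel model's GainNode:
var formantSynth = { connect: function (node) { this.connectedTo = node; } };
var vowelGainNode = { gain: { value: 1 } };

// Because the model "quacks like an audioNode", connect() works directly:
formantSynth.connect(vowelGainNode);
```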

Now my Vowel model uses the "user" API for the Formant model to control it:
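For example, sketched with a stub model exposing the standard user API (the parameter name and value are illustrative):

```javascript
// Stub standing in for a Formant model with the standard jsaSound user API:
var formantSynth = {
    params: { "Formant 1 Freq": 700 },
    playing: false,
    play: function () { this.playing = true; },
    release: function () { this.playing = false; },
    setParam: function (name, val) { this.params[name] = val; }
};

// The Vowel model drives the Formant model through the same API any user would call:
formantSynth.setParam("Formant 1 Freq", 300);
formantSynth.play();
```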

If you want to register a parameter of a jsaSound model you are using like an audioNode on the model that is using it, there is a handy shorthand for that:
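A hypothetical sketch of such a shorthand; the name `registerChildParam` is an assumption, but the idea is to re-register a child model's parameter with a setter that delegates to the child:

```javascript
// Hypothetical sketch: "registerChildParam" is an assumed name, not
// necessarily the real jsaSound method. It re-exposes a child model's
// parameter on the parent model, forwarding set calls to the child.
function makeVowelModel(childModel) {
    var params = {};
    function registerParam(name, range, setter) {
        params[name] = { range: range, setter: setter };
    }
    function registerChildParam(child, paramName) {
        registerParam(paramName, child.getRange(paramName), function (v) {
            child.setParam(paramName, v); // delegate straight to the child model
        });
    }
    registerChildParam(childModel, "Formant 1 Freq"); // parameter name is illustrative
    return {
        getNumParams: function () { return Object.keys(params).length; },
        setParam: function (name, val) { params[name].setter(val); }
    };
}
```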

which causes the FormantSynth parameters to show up as user parameters on the Vowel model.

A few other things you can do with the jsaSound tools:

Examples

But the jsaSound project has its roots in creating a standard API for all sound models so that any application (and web developer) could use any and all sound models in a consistent way. Some examples: