jsaSound for Sound Modelers
jsaSndLib helps you do two things:
- Provide your sound users with a simple and consistent API so your sounds are easy to use and reuse in different applications, and
- Provide you with some tools to support the development of rich and interesting interactive sound models.
There are docs, but examples might be more helpful.
You will typically create your sound models by connecting audioNodes together just as you normally do when using the Web Audio API. Then you use the jsaSound library magic.
There is a "base sound model" function object that gives you:
- the interface methods that users call to interact with the model (play(), release(), stop(), setParam(), setParamNorm()),
- callbacks onPlay(), onRelease(), and onStop() for you to use for sound-specific actions,
- methods for loading resources, queueing events in the future, etc.,
- a wrapper that makes your entire sound model "quack like an audioNode", so that you can use it in audio graphs (using connect()) with other Web Audio audioNodes. The provider of this ultra-useful capability is Kumar Subramanian, who calls these things GraphNodes.
This is the object your sound model factory returns to provide access to the interface. Hence you call it thus:
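The call might be sketched like this. The stub below is mine, not the library's: the real baseSM wraps actual Web Audio nodes, and the argument names (and the contents of the mystery first argument) are assumptions.

```javascript
// Minimal stand-in for the library's baseSM factory, only to show the
// call shape; the real jsaSound baseSM does far more.
function baseSM(mysteryArg, inputNodes, outputNodes) {
  return {
    // interface methods that model users call
    play: function () {},
    release: function () {},
    stop: function () {},
    setParam: function (name, val) {},
    // audioNode-style connect, delegated to the model's output nodes
    connect: function (destination) {
      outputNodes.forEach(function (n) {
        // in a browser: n.connect(destination)
      });
    }
  };
}

// Stand-in for the Web Audio node this model exposes as its output
var gainEnvNode = {};

// Second argument: nodes that OTHER audioNodes connect to (none here).
// Last argument: nodes this model uses to connect to the rest of the graph.
var myModel = baseSM({}, [], [gainEnvNode]);
```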
The last argument is an array of nodes your model will use to connect to other audioNodes. The second argument is an array of nodes that other audioNodes will use to connect to your model. The first argument is a mystery argument.
One of the key features of jsaSound is the standardized interface it presents to the world.
To expose a parameter for the model user:
After that, model users can call the setParam and getParam family of methods in the jsaSound user interface.
```
setParam(name_or_index, val)      // val within the parameter's [min, max]
setParamNorm(name_or_index, val)  // val normalized to [0, 1]
getParam(name_or_index, field)    // field is one of "name", "type", "val",
                                  //   "normval", "min", "max";
                                  //   "type" can return "range" or "url"
```
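To make the behavior of this family concrete, here is a sketch of a parameter registry like the one baseSM maintains. The registerParam signature and the internals are my assumptions, not the library's; the point is how normalized values in [0, 1] map linearly onto a parameter's [min, max] range.

```javascript
// Toy parameter registry: parameters are addressable by name or index,
// and carry a setter that pushes the value into the synthesis graph.
function makeParamRegistry() {
  var params = [];
  function find(nameOrIndex) {
    if (typeof nameOrIndex === "number") return params[nameOrIndex];
    return params.filter(function (p) { return p.name === nameOrIndex; })[0];
  }
  return {
    // registerParam(name, min, max, setter) is an assumed signature
    registerParam: function (name, min, max, setter) {
      params.push({ name: name, type: "range", min: min, max: max,
                    val: min, setter: setter });
    },
    setParam: function (nameOrIndex, val) {
      var p = find(nameOrIndex);
      p.val = val;
      p.setter(val);
    },
    // a normalized value in [0, 1] is mapped linearly onto [min, max]
    setParamNorm: function (nameOrIndex, normval) {
      var p = find(nameOrIndex);
      this.setParam(nameOrIndex, p.min + normval * (p.max - p.min));
    },
    getParam: function (nameOrIndex, field) {
      var p = find(nameOrIndex);
      if (field === "normval") return (p.val - p.min) / (p.max - p.min);
      return p[field];
    }
  };
}

var reg = makeParamRegistry();
var freq = 0;
reg.registerParam("Frequency", 100, 1000, function (v) { freq = v; });
reg.setParamNorm("Frequency", 0.5);  // maps 0.5 onto [100, 1000] -> 550
```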
So here is an example using the basic jsaSound facilities described above. I create a Formant synthesizer model with parameters for setting each of four formants. Then I construct a Vowel sound model that uses the Formant synthesizer just like an audioNode:
First, I create an instance of the Formant model:
Then I connect it to my Vowel model's gain node:
Now my Vowel model uses the "user" API for the Formant model to control it:
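The three steps above can be sketched together like this. The factory name, the parameter name, and the stub nodes are all my assumptions; in a browser the model would wrap real Web Audio nodes and connect() would patch actual audio.

```javascript
// Sketch of nesting one jsaSound model inside another.
// makeFormantSynth stands in for the real Formant model factory.
function makeFormantSynth() {
  var params = { "Formant 1 Freq": 500 };
  var connections = [];
  return {
    // "quacks like an audioNode": other code can connect() it
    connect: function (node) { connections.push(node); },
    connections: connections,
    setParam: function (name, val) { params[name] = val; },
    getParam: function (name) { return params[name]; },
    play: function () {}
  };
}

// Inside the Vowel model:
var formantSynth = makeFormantSynth();          // 1. create an instance
var gainLevelNode = { name: "vowelGain" };      // stand-in for a GainNode
formantSynth.connect(gainLevelNode);            // 2. wire it into the graph
formantSynth.setParam("Formant 1 Freq", 700);   // 3. drive it via the user API
```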
If you want to expose a parameter of a jsaSound model that you are using like an audioNode as a parameter on the model using it, there is a handy shorthand for that:
which causes the FormantSynth parameters to show up as user parameters on the Vowel model.
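I won't guess at the shorthand's real name, but its effect can be sketched as re-registering the inner model's parameter on the outer model. registerChildParam below is a hypothetical helper, not the library's API.

```javascript
// Hypothetical helper: an inner model's parameter becomes a user
// parameter of the outer model, forwarding set calls downward.
function registerChildParam(outerParams, childModel, paramName) {
  outerParams[paramName] = function (val) {
    childModel.setParam(paramName, val);
  };
}

// inner (FormantSynth) model stub
var formantSynth = {
  params: {},
  setParam: function (name, val) { this.params[name] = val; }
};

var vowelParams = {};  // the outer (Vowel) model's parameter table
registerChildParam(vowelParams, formantSynth, "Formant 1 Freq");

// A user setting the parameter on the Vowel model now reaches the synth:
vowelParams["Formant 1 Freq"](880);
```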
A few other things you can do with the jsaSound tools:
- Schedule function calls in the future (like setTimeout, but it manages a single queue for lots of events to use fewer timer resources). Good for rhythmic patterns or for stopping sounds after a "release" segment.
- Use the poly object to create a pool of instances of sound models so you don't have to keep track of them all yourself,
- Use what is still a very small collection of "OpCodes" for creating objects that hide unsightly Web Audio API verbosity,
- Load and manage audio resources. The audio resource manager ensures that a network request is made only once for a given audio URL (no matter how many models or polyphonic pools need it), and it hides all the XMLHttpRequest nonsense from you as well. The loadAudioResources method does it all, calling a function you provide to receive your buffer all ready to go.
- Use what is still a very small collection of utils for converting between MIDI notenumbers and frequencies, gain values and dB, etc.,
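The single-queue scheduling idea from the list above can be sketched as follows. This is a toy with a manually advanced clock; the real library schedules against audio time and drives the queue from one timer.

```javascript
// Toy version of a single event queue: many future calls, one timer.
function makeScheduler() {
  var queue = [];  // events kept sorted by time
  return {
    schedule: function (time, callback) {
      queue.push({ time: time, callback: callback });
      queue.sort(function (a, b) { return a.time - b.time; });
    },
    // In the real library a single timer drives this; here we advance
    // the clock by hand and fire everything that has come due.
    advanceTo: function (now) {
      while (queue.length > 0 && queue[0].time <= now) {
        queue.shift().callback(now);
      }
    }
  };
}

var sched = makeScheduler();
var fired = [];
sched.schedule(2.0, function () { fired.push("stop after release"); });
sched.schedule(0.5, function () { fired.push("beat 1"); });
sched.schedule(1.0, function () { fired.push("beat 2"); });
sched.advanceTo(1.0);  // fires the two beats, not the later stop
```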
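The poly pool can be sketched like this; the factory and method names are my assumptions, and a real pool would also handle voice stealing and release tails.

```javascript
// Sketch of a "poly" pool: pre-create model instances, hand out free
// ones, and reuse voices that have been returned.
function makePoly(factory, size) {
  var voices = [];
  for (var i = 0; i < size; i++) {
    voices.push({ busy: false, model: factory() });
  }
  return {
    // grab a free voice (or fall back to the first one if all are busy)
    note: function () {
      var v = voices.filter(function (x) { return !x.busy; })[0] || voices[0];
      v.busy = true;
      return v;
    },
    freeVoice: function (v) { v.busy = false; }
  };
}

var made = 0;
var pool = makePoly(function () { made++; return { play: function () {} }; }, 3);
var v1 = pool.note();
var v2 = pool.note();
pool.freeVoice(v1);
var v3 = pool.note();  // reuses the freed voice, no new instance created
```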
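The one-request-per-URL behavior of the resource manager can be sketched like this. fakeFetch stands in for the real XMLHttpRequest machinery, and all the names here are mine, not the library's.

```javascript
// Sketch of request de-duplication: one fetch per URL, any number of
// callbacks served from the cache or the in-flight request.
var fetchCount = {};
function fakeFetch(url, onDone) {
  fetchCount[url] = (fetchCount[url] || 0) + 1;
  onDone("buffer-for-" + url);  // pretend decoded audio buffer
}

var cache = {};  // url -> { buffer } or { waiting: [callbacks] }
function loadAudioResource(url, onLoad) {
  var entry = cache[url];
  if (entry && "buffer" in entry) { onLoad(entry.buffer); return; } // cached
  if (entry) { entry.waiting.push(onLoad); return; }                // in flight
  cache[url] = { waiting: [onLoad] };
  fakeFetch(url, function (buffer) {
    var waiting = cache[url].waiting;
    cache[url] = { buffer: buffer };
    waiting.forEach(function (cb) { cb(buffer); });
  });
}

var got = [];
loadAudioResource("drip.wav", function (b) { got.push(b); });
loadAudioResource("drip.wav", function (b) { got.push(b); });  // from cache
```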
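The conversion utilities use standard formulas (A440 is MIDI note 69, twelve notes per octave; 20 dB per factor of ten in amplitude). The function names below are mine.

```javascript
// MIDI note number to frequency in Hz
function note2Freq(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// decibels to linear gain, and back
function dB2Gain(db) {
  return Math.pow(10, db / 20);
}
function gain2dB(gain) {
  return 20 * Math.log(gain) / Math.LN10;
}
```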
But the jsaSound project has its roots in creating a standard API for all sound models so that any application (and web developer) could use any and all sound models in a consistent way. Some examples:
- The "slider box" controller (at animatedsoundworks.com).
- The dynamic score interface (the application is a shared graphical space for notating gestures that control sound models on a scrolling score),
- Sound tied to graphical behavior.