I’ve been working on a funky new homebrew software project called StandingWave: a musical audio processing engine built entirely in Flash. My goal in doing this is to explore a world of online interactive music applications in which audio is not merely played back, but generated on the fly — performed, in fact — from an underlying representation of musical events. Such applications might range from a traditional music notation editor to game-like music composition environments to… who knows?
Computer music performance is hardly something new, of course. But embedding the capability in Flash, at this point in the web’s evolution, can make musical applications accessible on the web and amenable to community use in a way that’s never been possible before. Think about what applications like Buzzword and Google Spreadsheets are doing for traditional “productivity apps”.
I’ve started with the audio engine because it’s an interesting technical challenge, although I’m working on some of the other pieces concurrently. I’ve put up an initial crude example that demonstrates sample-based waveform synthesis. This toy application can play back single notes, a chromatic scale and a sample MIDI file at various transpositions, tempos and volumes, and all of this is accomplished by actually synthesizing digital audio signals on the fly, starting from a set of recorded guitar samples and applying gain envelopes, frequency shifting and mixing. Musically, it’s hardly exciting, but it’s a start on the capabilities needed to concretely deliver music in the Flash Player with no external add-ons, and without leaning on the crappy, highly variable MIDI playback that browsers delegate to the underlying OS.
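To give a concrete flavor of what those processing steps involve, here is a minimal sketch (not StandingWave’s actual code) of the three buffer operations just mentioned: a linear gain envelope, frequency shifting via linear-interpolation resampling, and mixing. It assumes mono samples stored as an Array of Numbers in the range -1 to 1.

    // Scale each sample by a gain value interpolated linearly from
    // startGain to endGain over the length of the buffer.
    function applyGainEnvelope(samples:Array, startGain:Number, endGain:Number):void {
        var n:int = samples.length;
        for (var i:int = 0; i < n; i++) {
            samples[i] *= startGain + (endGain - startGain) * (i / n);
        }
    }

    // Shift frequency by a ratio (e.g. Math.pow(2, semitones / 12))
    // using linear-interpolation resampling of the source buffer.
    function pitchShift(samples:Array, ratio:Number):Array {
        var out:Array = [];
        for (var pos:Number = 0; pos < samples.length - 1; pos += ratio) {
            var i:int = int(pos);
            var frac:Number = pos - i;
            out.push(samples[i] * (1 - frac) + samples[i + 1] * frac);
        }
        return out;
    }

    // Mix one voice into an output buffer by summing samples in place.
    function mixInto(output:Array, voice:Array, offset:int):void {
        for (var i:int = 0; i < voice.length; i++) {
            output[offset + i] += voice[i];
        }
    }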
There’s no waveform audio output API in Flash, so how is this done? Read on…
The basic idea is as follows: write a SWF file into a ByteArray in memory as raw bytecodes that encode an audio stream as an embedded sound asset. This byte array is exactly what would be produced if, in the Flash IDE, one imported a sound into the Library and exported it for runtime ActionScript usage. Then, use Loader.loadBytes() to load this in-memory SWF. Once loaded, use getDefinitionByName() on the loaded SWF to instantiate the exported asset as a Sound object, at which point it can be played directly via the Flash media API.
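In code, the load-and-play side of the trick looks roughly like the sketch below. The SWFSoundWriter helper and the “GeneratedSound” class name are hypothetical stand-ins: actually emitting the SWF header and sound tags into the ByteArray is the bulk of the real work, and is omitted here.

    import flash.display.Loader;
    import flash.events.Event;
    import flash.media.Sound;
    import flash.system.ApplicationDomain;
    import flash.system.LoaderContext;
    import flash.utils.ByteArray;
    import flash.utils.getDefinitionByName;

    // Hypothetical helper: packs raw audio samples into the bytes of a
    // complete SWF whose single asset is a sound exported for ActionScript
    // under the class name "GeneratedSound".
    var swfBytes:ByteArray = SWFSoundWriter.write(samples);

    var loader:Loader = new Loader();
    loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onSwfLoaded);

    // Load the in-memory SWF into the current application domain so that
    // getDefinitionByName() can see the class it exports.
    loader.loadBytes(swfBytes,
        new LoaderContext(false, ApplicationDomain.currentDomain));

    function onSwfLoaded(e:Event):void {
        // The exported asset class extends Sound, so an instance of it
        // can be played directly through the Flash media API.
        var soundClass:Class = Class(getDefinitionByName("GeneratedSound"));
        var sound:Sound = Sound(new soundClass());
        sound.play();
    }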
There’s more to StandingWave than that, of course. Internally, there is a pipeline of objects representing music sources, audio sources, filters, mixers and performance objects. WAV and MIDI formats need to be handled. There are abstract notions of pitch, rhythm, tempo and temperament. The audio is processed in small chunks to avoid tying up the main event loop, and there are some very non-obvious tricks required to start audio chunks at the correct time, so that they are played successively with reasonable time synchronization. (Consider that the Flash Player doesn’t ever guarantee that some chunk of code will be run at any exact point in time, whether you’re listening for Timer events or enterFrame.) Although playback is almost flawless on my Mac, I’m still having some issues on Windows at the beginning of a playback sequence. There is lots more work to do.
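As a rough illustration of the chunking idea (and deliberately glossing over the start-time tricks just mentioned), one can drive synthesis from a Timer that keeps the supply of chunks slightly ahead of real time, so each tick does only a small amount of work. Here synthesizeNextChunk() is a hypothetical stand-in for the engine’s pipeline:

    import flash.events.TimerEvent;
    import flash.media.Sound;
    import flash.utils.Timer;
    import flash.utils.getTimer;

    const CHUNK_MS:int = 250;   // duration of each synthesized chunk
    const LEAD_MS:int = 100;    // how far ahead of real time to stay

    var nextChunkTime:Number = getTimer();

    // Tick more often than the chunk length, so event-loop jitter
    // can't let the supply of ready audio run dry.
    var pump:Timer = new Timer(CHUNK_MS / 4);
    pump.addEventListener(TimerEvent.TIMER, onPump);
    pump.start();

    function onPump(e:TimerEvent):void {
        // Synthesize and start chunks until we're comfortably ahead of
        // "now". Each pass does only a chunk's worth of work, keeping
        // the main event loop responsive.
        while (nextChunkTime < getTimer() + LEAD_MS) {
            var chunk:Sound = synthesizeNextChunk(CHUNK_MS); // hypothetical
            chunk.play();
            nextChunkTime += CHUNK_MS;
        }
    }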
(I came up with the audio-output trick independently, but I’m not the first one to think of it. The Popforge open source project is one earlier realization, and there may be others. Popforge is geared more to audio streaming than to musical event processing, and I think I’ve improved on its approach by not attempting to maintain a continuous digital audio output stream, but instead overlapping audio chunks that always start cleanly with a new musical event. The same engine (or rough approach) appears to power the recently announced Splice Music site, which exhibits some of the same characteristics as the kind of apps I’m interested in building.)
Anyway, more to come, but I thought I’d get something initial out there as food for thought, while I continue to make progress.