Adobe threw a slight curve ball with the long-awaited release of Flash Player 10 today. The release itself wasn’t the curve ball (although not everyone expected it to happen on this date). The surprise was the audio latency using the new dynamic sound generation APIs.
Audio latency, for programs that synthesize sound on the fly, is roughly defined as the amount of time between the program creating a sound and the computer’s audio output actually playing that same sound. Latency is important because it affects the user experience. If a user hits a “play” button and waits 1 second before hearing anything, that can be annoying. If a user types a note into a music notation editor and has to wait 1 second before hearing anything, that’s downright maddening.
Latency isn’t all bad, though. The higher the latency, the more “give” the sound output pipeline has, and the more resistant it is to disruption by other processes running in your browser and on your computer. Ideally you want to strike a balance between latency and robustness in your sound output.
In Noteflight, the latency on Mac OS X had been at a comfortable 30 milliseconds — about a 30th of a second — using the Flash 10 beta. But in the released production version, Noteflight’s audio latency suddenly jumped to about 850 milliseconds. Other folks with audio synthesis apps reported other numbers, all of them different. Whoa! What happened? (Before going further, let me say that our latency is now back at 250 milliseconds, so things aren’t as bad as they first seemed.)
Adobe introduced a new behavior into the released player, where the latency depends on the number of samples you provide to SampleDataEvent. If you provide 2048 samples per callback, your latency will be the minimum available on the platform (30 ms on Mac, ~250 on Windows). If you provide 8192 samples per callback, on the other hand, your latency will be in the neighborhood of 1000 milliseconds. The amount of latency for in-between sample block sizes is a complex function of this number. Tinic Uro will hopefully publish more information about that function, which apparently involves a power-law nonlinear function of the block size combined with the native sound driver buffer size, but experimentally I have seen these numbers on Mac OS X:
block size | latency (ms)
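To make the block-size knob concrete, here is a minimal sketch of the dynamic sound API as I understand it. The 440 Hz sine tone, the 44.1 kHz sample rate, and the amplitude are my own illustrative choices, not anything from Noteflight; the one parameter that matters for latency is blockSize, which at 2048 should give the platform minimum and at 8192 roughly a second of delay.

```actionscript
import flash.media.Sound;
import flash.events.SampleDataEvent;

var blockSize:int = 2048; // samples per callback; smaller → lower latency
var phase:Number = 0;

var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();

function onSampleData(e:SampleDataEvent):void {
    // Write blockSize stereo sample frames as 32-bit floats.
    for (var i:int = 0; i < blockSize; i++) {
        var v:Number = 0.2 * Math.sin(phase);
        phase += 2 * Math.PI * 440 / 44100; // 440 Hz tone at 44.1 kHz
        e.data.writeFloat(v); // left channel
        e.data.writeFloat(v); // right channel
    }
}
```

Note that the player requires somewhere between 2048 and 8192 sample frames per callback, which is exactly the range over which the latency behavior described above plays out.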
This is good stuff to know, and it’s something of a surprise to us all. I notched the sample size in Noteflight back down to 4096 in a hurry, and latency is acceptable once again. Or, semi-acceptable. I’d like to get it even lower, but I don’t want to do that in too much of a hurry without adequate testing and tuning of the synthesis pipeline.
[Important footnote (10/16/08): You might think that by adjusting the sample block size dynamically, you can change the latency on the fly for a single Sound object that is streaming samples. The answer is… sort of, but not in a useful way. You can make the latency increase by making the sample block size larger, but if you make it smaller again, the latency will not go back down: it will stay at the higher value. What is reported to work, but I haven’t tried it yet, is to output sound using more than one Sound object, each with its own block size (and hence its own latency value). The usefulness of that approach is also not clear.]
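For what it’s worth, the multi-Sound idea in the footnote might look something like the following. This is an untested sketch of an approach I have only heard reported, not verified; the tone frequencies and block sizes are arbitrary placeholders.

```actionscript
import flash.media.Sound;
import flash.events.SampleDataEvent;

// Build a Sound that streams a sine tone with its own fixed block size
// (and hence, per the behavior described above, its own latency).
function makeTone(freq:Number, blockSize:int):Sound {
    var phase:Number = 0;
    var s:Sound = new Sound();
    s.addEventListener(SampleDataEvent.SAMPLE_DATA,
        function(e:SampleDataEvent):void {
            for (var i:int = 0; i < blockSize; i++) {
                var v:Number = 0.1 * Math.sin(phase);
                phase += 2 * Math.PI * freq / 44100;
                e.data.writeFloat(v); // left
                e.data.writeFloat(v); // right
            }
        });
    return s;
}

var lowLatency:Sound  = makeTone(440, 2048); // minimum platform latency
var highLatency:Sound = makeTone(220, 8192); // roughly 1 second of latency
lowLatency.play();
highLatency.play();
```

Whether juggling two independently-latent streams is actually useful in practice is, as I said, not clear to me.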