The synthesis of new audio during a game's execution can be useful, especially in response to unforeseen or rare events. In this chapter, I look at how to generate tone sequences for sampled audio and how to create MIDI sequences at runtime. The discussion is split into two main parts: synthesis of sampled audio and synthesis of sequences. I finish by describing additional libraries and APIs that can help with audio generation.
Sampled audio is encoded as a series of samples in a byte array, which is sent through a SourceDataLine to the mixer. In previous examples, the contents of the byte array came from an audio file, though you saw that audio effects can manipulate and even add to the array. In sampled audio synthesis, the application generates the byte array data without requiring any audio input. Potentially, any sound can be generated at runtime.
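As a sketch of the playback side of this pipeline, the following shows how a generated byte array might be pushed through a SourceDataLine. The format parameters here (8-bit, signed, mono, 8 kHz) are illustrative choices of mine, not requirements:

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

public class SynthPlayback {
  // Illustrative format: 8 kHz sample rate, 8 bits per sample,
  // 1 channel (mono), signed samples, little-endian
  static AudioFormat buildFormat() {
    return new AudioFormat(8000f, 8, 1, true, false);
  }

  // Send a synthesized byte array to the mixer via a SourceDataLine
  static void play(byte[] samples) throws LineUnavailableException {
    AudioFormat format = buildFormat();
    SourceDataLine line = AudioSystem.getSourceDataLine(format);
    line.open(format);
    line.start();
    line.write(samples, 0, samples.length);
    line.drain();   // wait until all the data has been played
    line.close();
  }
}
```

The write() call delivers the synthesized bytes exactly as a file-based player would deliver bytes read from disk; the mixer cannot tell the difference.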
Audio is a mix of sine waves, each one representing a tone or a note. A pure note is a single sine wave with a fixed amplitude and frequency (or pitch). Frequency is the number of wave cycles that pass a given point per second, measured in hertz (Hz). The higher the frequency, the higher the note's pitch; the higher the amplitude, the louder the note.
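For instance, a pure tone can be synthesized directly by sampling a sine wave at regular intervals. The helper below is my own illustration, not code from this chapter: it fills a byte array with an 8-bit signed mono tone, where freqHz sets the pitch and maxAmplitude (0 to 127) sets the loudness:

```java
public class ToneSynth {
  // Fill a byte array with an 8-bit signed mono sine tone.
  // freqHz controls the pitch; maxAmplitude (0..127) controls the volume.
  static byte[] sineTone(double freqHz, double durSecs,
                         float sampleRate, int maxAmplitude) {
    int numSamples = (int) (durSecs * sampleRate);
    byte[] buf = new byte[numSamples];
    for (int i = 0; i < numSamples; i++) {
      // The angle advances by (freqHz / sampleRate) of a full
      // cycle with each successive sample
      double angle = 2.0 * Math.PI * freqHz * i / sampleRate;
      buf[i] = (byte) (Math.sin(angle) * maxAmplitude);
    }
    return buf;
  }
}
```

The resulting array can be handed to a SourceDataLine whose AudioFormat matches the 8-bit, signed, mono encoding assumed here.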
Before I go further, it helps to introduce the usual naming scheme for notes; it's easier to talk about note names than note frequencies.
Note names are derived from the piano keyboard, which has a mix of black and white keys, shown in ...
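Although note names are more convenient to talk about, each name still corresponds to a definite frequency. As an aside of mine (not part of this chapter's discussion), equal-tempered tuning derives that frequency from a MIDI-style note number: A above middle C (note 69) is fixed at 440 Hz, and each semitone multiplies the frequency by 2^(1/12):

```java
public class NoteFreq {
  // Equal-tempered frequency for a MIDI note number
  // (A4 = note 69 = 440 Hz; middle C = note 60)
  static double noteToFreq(int midiNote) {
    return 440.0 * Math.pow(2.0, (midiNote - 69) / 12.0);
  }
}
```

Moving up 12 semitones doubles the frequency, which is why notes an octave apart share the same letter name.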