A program structure based on recently developed techniques for operating-system simulation has the flexibility required of a research framework for speech synthesis algorithms. It permits synthesis with a less rigid time and frequency-component structure than simpler schemes allow, and it lets much of the speech knowledge required for synthesis be removed from the main driving structure and embodied in tables and procedures that can easily be modified or replaced. The program also meets real-time operation and memory-size constraints. The resulting view of speech structure at the acoustic-segmental level is one of time-ordered, perceptually relevant events, and is related to that used in the author's work on automatic speech pattern discrimination. The flexibility of the scheme, and the strong mutual independence of the many processes with differing objectives that must run together to approximate real speech variation, have proved a welcome release from the problems of earlier approaches.
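The abstract does not specify the implementation, but the architecture it describes can be illustrated with a minimal sketch: a driving loop that dispatches time-ordered events, as in discrete-event operating-system simulation, with the speech knowledge held in a replaceable table of handler procedures rather than in the loop itself. All names here (`Synthesizer`, `vowel_onset`, `formant_target`, the 0.05 s offset) are hypothetical and chosen only for illustration.

```python
import heapq
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass(order=True)
class Event:
    """A time-stamped, perceptually relevant event; ordered by time, then sequence."""
    time: float
    seq: int
    kind: str = field(compare=False)
    data: Any = field(compare=False, default=None)

class Synthesizer:
    """Main driving structure: pops events in time order and dispatches them.

    Speech knowledge lives entirely in the handler table, so it can be
    modified or replaced without touching this loop.
    """
    def __init__(self) -> None:
        self.queue: list[Event] = []
        self.handlers: dict[str, Callable[["Synthesizer", Event], None]] = {}
        self._seq = 0
        self.log: list[tuple[float, str, Any]] = []

    def register(self, kind: str, handler: Callable[["Synthesizer", Event], None]) -> None:
        self.handlers[kind] = handler

    def schedule(self, time: float, kind: str, data: Any = None) -> None:
        heapq.heappush(self.queue, Event(time, self._seq, kind, data))
        self._seq += 1

    def run(self) -> None:
        while self.queue:
            ev = heapq.heappop(self.queue)
            handler = self.handlers.get(ev.kind)
            if handler:
                handler(self, ev)  # handlers may schedule further events

# Hypothetical knowledge procedures: a vowel onset schedules a later
# formant-target event; neither is part of the driving structure.
def on_vowel_onset(synth: Synthesizer, ev: Event) -> None:
    synth.log.append((ev.time, "vowel_onset", ev.data))
    synth.schedule(ev.time + 0.05, "formant_target", ev.data)

def on_formant_target(synth: Synthesizer, ev: Event) -> None:
    synth.log.append((ev.time, "formant_target", ev.data))

synth = Synthesizer()
synth.register("vowel_onset", on_vowel_onset)
synth.register("formant_target", on_formant_target)
synth.schedule(0.0, "vowel_onset", "/a/")
synth.run()
```

The point of the sketch is the independence property the abstract emphasizes: each handler pursues its own objective and communicates only by scheduling further events, so processes can be added, replaced, or removed without restructuring the main loop.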