The challenge of composing both sound and moving image within a coherent computer-mediated framework is addressed, and some of the aesthetic issues highlighted. A conceptual model for an audiovisual delivery system is proposed, and this model acts as a guide for detailed discussion of some illustrative examples of audiovisual composition. Options for the types of score generated as graphical output of the system are outlined. The need for extensive algorithmic control of compositional decisions within an interactive framework is argued. The combination of Tabula Vigilans Audio Interactive (TVAI), an algorithmic composition language for electroacoustic music and real-time image generation, with MIDAS, a multiprocessor audiovisual system platform, is shown to provide the features called for in the conceptual outline given earlier, and examples are given of work achieved using these resources. It is shown that new work may ultimately be distributed efficiently via the World Wide Web, with composers' interactive scripts delivered remotely but rendered locally by means of a user's ‘rendering black box’.