Learning Web Audio by Recreating The Works of Steve Reich and Brian Eno
Posted by Tero Parviainen
Systems music is an idea that explores the following question: What if we could, instead of making music, design systems that generate music for us?
This idea has animated artists and composers for a long time and emerges in new forms whenever new technologies are adopted in music-making.
The 1960s and 70s were a particularly fruitful period for this approach. People like Steve Reich, Terry Riley, Pauline Oliveros, and Brian Eno designed systems that produced many landmark works of minimal and ambient music. They worked with the cutting-edge technologies of their time: magnetic tape recorders, loops, and delays.
With Web Audio we can do something Reich, Riley, Oliveros, and Eno could not all those decades ago. They could only share the output of their systems by recording it; we can share the system itself. Thanks to the unique reach of the web platform, all we need to do is send a URL.