
Understand more about the Web Audio API, an API that lets us create and manage sounds in the browser very easily.

Before the release of HTML5, building web applications that used audio was very difficult. The main cause was poor support in browser APIs, so developers had to hunt for alternatives that weren’t really conventional or appropriate for the job.

Can you imagine how developers managed to use audio on the web before the <audio> element existed, for example? Many relied on tools that are all but dead today, such as Flash, Silverlight or other third-party plugins. If you don’t know exactly what those plugins are or what purpose they served, you missed out on a lot of difficult-to-build audio applications (lucky you).

Back in 2011, browsers started to support media features such as the <audio>, <video> and <source> elements. This was a game-changer and improved a lot of applications that depended on these features. Developers were now able to use audio in their applications, but the support still wasn’t ideal.

Now, with the Web Audio API, we’re able to create awesome applications using audio on the web without having to build or pull in third-party libraries, and the API is well supported in all modern browsers.

Web Audio API

Let’s start with the basics of the Web Audio API. This is how the API works:

  1. The Web Audio API has a main audio context.
  2. Inside that audio context, we handle and manage our audio operations; each operation is performed by an audio node.
  3. We can have a lot of different audio nodes inside the same audio context, allowing us to create some nice things such as drum kits, synthesizers, etc.

Let’s create our first audio context using the Web Audio API and start to make some noise in our browser. This is how you can create an audio context:

const audioContext = new (window.AudioContext || window.webkitAudioContext)();

The audio context is the object that contains everything audio-related. It’s not a good idea to have more than one audio context in your project; it can cause you a lot of trouble down the road.
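A common way to follow that advice is to create the context in a single module and reuse it everywhere. Here’s a minimal sketch of that idea (it assumes an ES-module setup, and the file and function names are my own, not part of the Web Audio API):

// audioContext.js
let sharedContext;

export function getAudioContext() {
  // Create the context lazily on first use, then hand back the same
  // instance to every part of the application that asks for it.
  if (!sharedContext) {
    sharedContext = new (window.AudioContext || window.webkitAudioContext)();
  }
  return sharedContext;
}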

The Web Audio API has an interface called OscillatorNode. This interface represents a periodic waveform, such as a sine wave. Let’s use this interface to create some sound.

Now that our audioContext const holds the audio context, let’s create a new const called mySound by calling the createOscillator method on audioContext, like this:

const mySound = audioContext.createOscillator();

We’ve created our OscillatorNode; now we need to start mySound, like this:

mySound.start();

But, as you can hear, nothing is playing in your browser yet. Why? We created and started the oscillator, but we never connected it to an output. The audio context exposes a destination property that represents the default output (usually your speakers), and a node only becomes audible once it’s connected to that destination.

So, take the mySound const, call its connect method and pass in audioContext.destination, like this:

mySound.connect(audioContext.destination);

Now we’re using the Web Audio API to very easily create noises in our browser.
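To see the whole flow in one place, here’s a minimal sketch that puts these steps together. One detail worth knowing: most browsers only allow audio to start after a user interaction, so the sketch waits for a click before making any noise (the click trigger and the one-second duration are my own choices, not requirements of the API):

const audioContext = new (window.AudioContext || window.webkitAudioContext)();

document.addEventListener("click", () => {
  // Browsers usually create the context in a suspended state until the
  // user interacts with the page, so resume it first.
  audioContext.resume();

  const mySound = audioContext.createOscillator();
  mySound.connect(audioContext.destination); // route the oscillator to the speakers
  mySound.start();
  mySound.stop(audioContext.currentTime + 1); // silence the beep after one second
}, { once: true });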

Properties

The OscillatorNode has some properties, such as type. The type property specifies the type of waveform that we want our OscillatorNode to output. There are five possible values: sine (the default), square, sawtooth, triangle and custom (custom isn’t assigned directly; it’s the value the property takes when you call setPeriodicWave).

To change the type of our OscillatorNode, all we have to do is assign a new value to the type property of mySound (before or after calling start()), like this:

mySound.type = "square";

The OscillatorNode also has another property called frequency, which controls the frequency of the oscillation in hertz.

To change the frequency of our OscillatorNode, we access the frequency property and call its setValueAtTime function. This function receives two arguments: the value in hertz and the time at which the change should take effect; passing audioContext.currentTime applies the change immediately. We can use it like this:

mySound.frequency.setValueAtTime(400, audioContext.currentTime);
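Because the second argument is a point in time on the audio context’s clock, you can also schedule frequency changes in advance instead of applying them right away. Here’s a small sketch, with the frequencies and timings picked purely for illustration:

const now = audioContext.currentTime;

mySound.frequency.setValueAtTime(440, now);        // start at 440 Hz (an A note)
mySound.frequency.setValueAtTime(660, now + 0.5);  // jump to 660 Hz half a second later
// AudioParam also offers ramps, for example a smooth glide back down:
mySound.frequency.linearRampToValueAtTime(220, now + 1.5);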

By using the Web Audio API, we can now manage audio pretty easily in our browsers, but if you want to use this API to build something more elaborate and powerful, you’ll probably want a library to help.

To make things easier, we can use a JavaScript audio library for the job, and one of the most popular JavaScript audio libraries today is Howler. With this library we can create fancier examples and make use of the full power of the Web Audio API.

Howler

The Howler JavaScript audio library was created to make working with the Web Audio API more reliable, easier and more fun. The library has a lot of features and advantages, such as:

  • Easy to learn: Howler is very easy to learn and get started with, and it has decent, well-explained documentation.
  • Support: With this library you don’t have to worry about whether your code will run on older browsers. Howler uses the Web Audio API by default and falls back to HTML5 Audio, so your code will run in almost every browser, even older ones like IE9.
  • Full control: You can control basically everything, from the src of the audio to volume, seek, rate, autoplay, mute, etc.
  • Zero dependencies: The library has no plugins or dependencies, which keeps it light and less prone to bugs and errors.

Now that we know a little bit about this library, let’s start to use it. Let’s create a simple player. First let’s create a folder called player, and inside that folder we’re going to create a package.json.

yarn init -y

Now, we need to install the Howler library.

yarn add howler

Now, let’s create an HTML file called player.html and, inside that file, create two simple buttons: one to play the audio and one to pause it. We’ll also reference the index.js file, which we’ll create later, from this HTML file.

Our HTML file should look like this:

<!DOCTYPE html>
<html>
<head>
  <title>Understanding the Web Audio API</title>
  <meta charset="UTF-8" />
</head>

<body>
  <div>
    <button id="play">Play</button>
    <button id="pause">Pause</button>
  </div>
  <script src="./index.js"></script>
</body>
</html>

Now, let’s create our index.js file and import the Howler package inside it. (Because we’re using require here, index.js needs to go through a bundler such as webpack or Parcel before the browser can run it; alternatively, you could load Howler from a <script> tag, which exposes Howl globally, and skip the import.)

const { Howl } = require('howler');

Next, we should create a new const called sound, which will be an instance of Howl. The Howl class accepts a lot of options, such as src, volume, loop, etc. If you want to learn more about all the options, you can read the Howler documentation.

We’re going to use three options for now: src to pass our audio source, volume which will be 0.5, and preload which will be true.

const sound = new Howl({
  src: 'http://eatandsleep.net/billboard/1988/10-Rick%20Astley%20-%20Never%20Gonna%20Give%20You%20Up.mp3',
  volume: 0.5,
  preload: true,
});
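The constructor accepts many more options than these three; for example, you can loop a track or react when it finishes playing. Here’s a small sketch with a few extra options (the file path and values are made up for illustration):

const backgroundMusic = new Howl({
  src: ['background-music.mp3'], // hypothetical path; src also accepts an array of fallback formats
  volume: 0.3,
  loop: true,                    // start over automatically when the track ends
  onend: function () {
    // Fires when playback reaches the end (on every pass when looping).
    console.log('Reached the end of the track');
  },
});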

Now, let’s create two simple functions to play and pause our audio. We’re going to create a playAudio function to play our audio, and a pauseAudio function to pause our audio. Also, we need to add an event listener to the buttons, so every time we click the buttons, the respective function of each button is invoked.

function playAudio() {
  sound.play();
}

function pauseAudio() {
  sound.pause();
}

const play = document.getElementById("play");
play.addEventListener("click", playAudio, false);

const pause = document.getElementById("pause");
pause.addEventListener("click", pauseAudio, false);

Now click on the play button, and our player should be working nicely!

The Howl instance has a lot of different methods that you can use and compose to get a pretty nice result in your application. For example, let’s build two controls that appear in a lot of audio applications: skip-backward and skip-forward buttons.

Let’s create these two new buttons: one to skip 10 seconds back and another to skip 10 seconds forward. We’ll give each button a unique ID.

<button id="back">-10</button>
<button id="forward">+10</button>

Now, back in our index.js file, we’re going to create two functions very similar to the first two, but this time, inside each function, we’re going to read the current position with seek() and move it backward or forward by 10 seconds, like this:

function backAudio() {
  // seek() with no argument returns the current position in seconds;
  // seek(position) jumps to that position.
  sound.seek(Math.max(sound.seek() - 10, 0)); // don’t seek past the start of the track
}

function forwardAudio() {
  sound.seek(sound.seek() + 10);
}

Now let’s add two event listeners to our buttons, so every time we click a button it’ll call the button’s respective function.

const back = document.getElementById("back");
back.addEventListener("click", backAudio, false);

const forward = document.getElementById("forward");
forward.addEventListener("click", forwardAudio, false);
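
Beyond play, pause and seek, the Howl instance also offers methods such as volume() and fade(), which compose nicely with the player we just built. As one more illustration, here’s a small sketch of a fade-out button (the extra button and its id are my own addition; they’re not in the HTML above):

// Assumes the HTML also contains: <button id="fade">Fade out</button>
function fadeOutAudio() {
  // fade(from, to, duration): fade from the current volume down to 0 over 2 seconds.
  sound.fade(sound.volume(), 0, 2000);
}

const fadeButton = document.getElementById("fade");
fadeButton.addEventListener("click", fadeOutAudio, false);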

We now have a very decent player built with Howler, a simple but powerful JavaScript audio library. I’d really recommend learning more about Howler and building more things with it; it’ll help you a lot if you’re building anything audio-related on the web.

Conclusion

In this article, we learned more about the Web Audio API and how it can help us work with audio in modern browsers. It opens up a lot of possibilities for creating more complex applications that manage audio in the browser, and we can use it in our own applications to make them more interactive.


About the Author

Leonardo Maldonado

Leonardo is a full-stack developer, working with everything React-related, and loves to write about React and GraphQL to help developers. He also created the 33 JavaScript Concepts.
