4'33" (as performed by your browser)

This is a collection of browser-based players that can perform John Cage's 4'33" composition.
True to the original spirit, each player might cause a slightly different performance to take place, as each player differs slightly from the rest, but the end result is similar: you hear an interpretation of 4'33".
Some players do very little overall. Some do a bit more upfront work and then act as if the metaphorical piano lid had been closed at the beginning of the performance. Some of that upfront work briefly blocks the UI thread (not that you would notice... generally). Some players actively generate no sound; others do generate sound, albeit in an optimised, but equally quiet, manner.
Any fan or hard drive noises emitted by the device the player is running on, any increment in temperature, and any other external noises that might occur during a performance are all part of the performance.
Configuration
Not that a member of the audience should tell the players what to do... but there is a small configuration section that can be opened, which allows you to modify the length of the performance and whether to actively emit sound.
These options are there for debugging purposes, and you are encouraged not to tinker with them, as they will cause a different piece to be performed. But you can tinker with them if you want to, because it's your device and you decide what to listen to.
Just make sure your device's volume is not very loud, as you might get a lot of noise blasted through the speakers otherwise!
A little bit more about the players
The code follows a classic Object Oriented Programming (OOP) hierarchy, written with the aim of reusing as much code as possible.
We start with a `Base` player, which is in turn extended by two categories of players: those that use an `AudioContext` based solution, and those that generate audio data directly and output it to an `<audio>` element. Both categories contain multiple players, some building upon other players.
The UI is a web component which is instantiated per player. The UI interacts with the players by calling their `play` and `pause` methods. OOP makes this part very straightforward to implement, as it defines a "shared API" for the players.
The hierarchy of players
- `Base`
  - No OP
  - `AudioContext` (and nothing else)
    - `AudioContext` with `AudioWorklet`
    - `AudioContext` with `OfflineAudioContext`
  - Generate WAV
    - Generate WAV with Web Worker
The players, in detail
0. `Base`
Keeps track of time, and provides pausing, resuming, and end-of-performance detection. It can also attach to a UI element and update it with the current playing time and status.
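To give an idea of its shape, here's a minimal sketch of such a base class; the names and details here are my own illustration, not the project's actual code:

```js
// A minimal sketch of a timekeeping base player. Names are illustrative.
class BasePlayer {
  constructor(lengthInSeconds = 273) { // 4'33" is 273 seconds
    this.length = lengthInSeconds;
    this.elapsed = 0;
    this.playing = false;
  }

  play() {
    this.playing = true;
    this.lastTick = performance.now();
    this.tick();
  }

  pause() {
    this.playing = false;
  }

  tick() {
    if (!this.playing) return;
    const now = performance.now();
    this.elapsed += (now - this.lastTick) / 1000;
    this.lastTick = now;
    if (this.elapsed >= this.length) {
      this.playing = false;
      this.onEnded?.(); // end-of-performance detection
    } else {
      requestAnimationFrame(() => this.tick()); // also drives UI updates
    }
  }
}
```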
1. No OP
Does nothing apart from what the base player does.
2. `AudioContext`
An `AudioContext` is created, but no nodes are connected to it.
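A sketch of the core idea, with the context's `suspend` and `resume` methods mapping naturally onto pausing and playing:

```js
// Sketch: the whole performance is an AudioContext with nothing connected.
const context = new AudioContext();

const play = () => context.resume();
const pause = () => context.suspend();
```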
3. `AudioContext` with `AudioWorklet`
An `AudioWorkletNode` is used to generate audio data via an instance of `SilenceProcessor`, which extends `AudioWorkletProcessor`.
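A sketch of what such a processor could look like; the `silence-processor.js` file name and registration name are assumptions of mine:

```js
// silence-processor.js — runs in the AudioWorklet global scope.
class SilenceProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    // Output channels are zero-filled by default, but fill them explicitly
    // so the intent is visible.
    for (const output of outputs) {
      for (const channel of output) {
        channel.fill(0);
      }
    }
    return true; // keep the processor alive until the node is disconnected
  }
}

registerProcessor('silence-processor', SilenceProcessor);
```

On the main thread, the module is loaded into the context and the node connected:

```js
// Main thread (inside an async function): load the module and wire up the node.
await context.audioWorklet.addModule('silence-processor.js');
const node = new AudioWorkletNode(context, 'silence-processor');
node.connect(context.destination);
```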
4. `AudioContext` with `OfflineAudioContext`
Audio is rendered upfront using an `OfflineAudioContext`, then played through an `AudioBufferSourceNode` in the `AudioContext` created by the parent class.
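A sketch of the technique; the channel count and sample rate here are assumptions for illustration:

```js
// Inside an async function: render 273 seconds of silence ahead of time.
const sampleRate = 44100;
const offline = new OfflineAudioContext(2, sampleRate * 273, sampleRate);
const renderedBuffer = await offline.startRendering(); // no nodes: all zeroes

// Then play the rendered buffer through a regular AudioContext.
const context = new AudioContext();
const source = new AudioBufferSourceNode(context, { buffer: renderedBuffer });
source.connect(context.destination);
source.start();
```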
5. Generate WAV
The audio data is generated upfront, then played through an `<audio>` element.
Depending on the configured length of the performance, this could considerably block the UI thread, making the website unresponsive.
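For illustration, here's a sketch of how a silent 16-bit mono PCM WAV file can be built in memory; the header follows the standard RIFF/WAVE layout, and everything after it is zeroes (the function name is mine):

```js
function generateSilentWav(seconds, sampleRate = 44100) {
  const numSamples = seconds * sampleRate;
  const dataSize = numSamples * 2; // 16-bit mono: 2 bytes per sample
  const buffer = new ArrayBuffer(44 + dataSize);
  const view = new DataView(buffer);

  const writeString = (offset, s) => {
    for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  };

  writeString(0, 'RIFF');
  view.setUint32(4, 36 + dataSize, true); // file size minus the first 8 bytes
  writeString(8, 'WAVE');
  writeString(12, 'fmt ');
  view.setUint32(16, 16, true);  // fmt chunk size
  view.setUint16(20, 1, true);   // audio format: PCM
  view.setUint16(22, 1, true);   // channels: mono
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * 2, true); // byte rate
  view.setUint16(32, 2, true);   // block align
  view.setUint16(34, 16, true);  // bits per sample
  writeString(36, 'data');
  view.setUint32(40, dataSize, true);
  // Sample data: ArrayBuffers are zero-initialised, so the "audio" is done.

  return new Blob([buffer], { type: 'audio/wav' });
}

const audio = document.querySelector('audio');
audio.src = URL.createObjectURL(generateSilentWav(273));
```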
6. Generate WAV (with Web Worker)
The audio data is generated in a web worker rather than in the UI thread.
This takes slightly longer to initialise, but it won't block the UI thread.
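A sketch of the division of labour, assuming a hypothetical `buildSilentWavBytes` helper that contains the same generation logic as above and returns an `ArrayBuffer`:

```js
// wav-worker.js — build the WAV bytes off the main thread.
self.onmessage = ({ data: { seconds } }) => {
  const buffer = buildSilentWavBytes(seconds); // hypothetical helper, as above
  self.postMessage({ buffer }, [buffer]); // transfer ownership, no copy
};
```

```js
// Main thread: ask the worker for the file, then hand it to the <audio> element.
const worker = new Worker('wav-worker.js');
worker.onmessage = ({ data: { buffer } }) => {
  const blob = new Blob([buffer], { type: 'audio/wav' });
  document.querySelector('audio').src = URL.createObjectURL(blob);
};
worker.postMessage({ seconds: 273 });
```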
Discarded options
These are players I initially came up with but did not implement.
`AudioContext` with `ScriptProcessorNode`
My initial list of ideas included a player implemented with `ScriptProcessorNode`, but since then a better implementation option has been standardised and shipped in browsers, so I never wrote that player. Instead, the `AudioWorklet` version exists.
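For reference, a sketch of what that discarded player might have looked like; `ScriptProcessorNode` runs its callback on the main thread, which is one of the reasons it was deprecated in favour of `AudioWorklet`:

```js
const context = new AudioContext();
const processor = context.createScriptProcessor(4096, 1, 1);
processor.onaudioprocess = (event) => {
  event.outputBuffer.getChannelData(0).fill(0); // silence, on the main thread
};
processor.connect(context.destination);
```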
`MediaRecorder`
I was not deeply familiar with the MediaRecorder API at the time I sketched out my ideas, so I wasn't super sure of how it would work in practice.
When I started implementing the various players, it became evident to me that a `MediaRecorder` based version would be a bit futile: I don't think you can render streams offline the way you can render audio with `OfflineAudioContext`. We would first need to spend the full performance time generating the stream so the recorder could record it, and only once the recording finished and the audio file was generated could we use that file with an `<audio>` element or another `AudioContext` to let the user play the generated stream. That sounded a little bit too performative, so I decided not to pursue this version.
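To illustrate the problem, a sketch of the approach: the recorder has to run for the full duration of the piece, in real time, before there is anything to play back:

```js
const context = new AudioContext();
const destination = context.createMediaStreamDestination(); // a silent stream
const recorder = new MediaRecorder(destination.stream);
const chunks = [];

recorder.ondataavailable = (event) => chunks.push(event.data);
recorder.onstop = () => {
  const blob = new Blob(chunks, { type: recorder.mimeType });
  document.querySelector('audio').src = URL.createObjectURL(blob);
};

recorder.start();
setTimeout(() => recorder.stop(), 273 * 1000); // wait out the whole performance
```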
Using other APIs or combinations of APIs
I could have also built players that did things such as creating oscillator nodes and playing them with a gain value of 0. This would be like having a drummer bang a drum outside an anechoic chamber, and recording the performance from inside the chamber. It could be done, but I didn't, because although the end result is superficially similar, I felt that this line of players veers off from the initial spirit, as they would be actively generating sound that we then mute.
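A sketch of what that drummer would have looked like:

```js
// An oscillator runs for the whole performance, muted by a zero-gain node.
const context = new AudioContext();
const oscillator = new OscillatorNode(context);
const gain = new GainNode(context, { gain: 0 });
oscillator.connect(gain).connect(context.destination);
oscillator.start();
```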
There might also be other ways that don't directly involve `<audio>` elements or Web Audio APIs and which I have not explored: for example, generating an image in a canvas, extracting the colour values out of the canvas element, and converting those into an array of data which could be used to create an audio file or `ArrayBuffer`... but I felt that the point had been proved already, as this would be very similar to the existing players, just with extra layers of complexity around them and without adding a lot of interest.
There might also be other APIs that I don't know of yet or that do not exist yet that could enable new interpretations of this piece in the browser. I might revisit the project in the future.
Why build this?
One morning almost ten years ago, I was having breakfast at a reconverted sewing machine table (sans machine, that is) in the former Black Sheep Coffee shop on Charlotte Street in London, a narrow, dark ground-floor unit with mismatched floor tiles.
As usual, I was scanning Twitter to see if there was anything relevant that had happened overnight and that needed replying to (something that I did as part of my devrel job!). Then I spotted an interesting Twitter discussion about 4'33" and JavaScript that sparked the lightbulb in my brain and caused me to get my notebook out and start writing down all the ways in which the piece could be implemented in JavaScript.
I had been working a lot with audio and Web Audio stuff at the time, so I'm not surprised in the slightest that my brain was keen on coming up with all sorts of solutions.
And then... I didn't do anything about it.
But the thought of the project would come back to me from time to time. And I still didn't do anything about it.
Until now!
In which I have built it because I am tired of thinking about building it 😆
Who?
But who prompted the idea? I hate it when I don't remember these things!
I consulted my Twitter archive to locate the thread. I more or less knew this happened in 2015 or maybe 2016... Thankfully, I didn't need to read all the tweets of that period because I still have that notebook and I found the notes!
The notes weren't dated, as I seem to have written them down in a hurry, with no time for formalities, but there was a date on the next page, and that helped me narrow the timeline down: no later than November 2015.
With that, I could find my side of the conversation in my archive, and who I was tweeting to, but as this is only an archive of my tweets, the full exchange is incomplete: the tweets from the other participants, including the initial tweet that started the thread, are missing.
However, this was a short interaction, so I can reconstruct the context (at least from my point of view):
Darius Kazemi started the original thread, and Jenn Schiffer retweeted it. And then I read it, and enjoyed the idea, and excitedly responded with a Web Audio based version:
> simplified to:
> `new AudioContext();`
> for performance 😜
And you know the rest!
Belated thanks to Darius and Jenn for the initial prompt 😃