Ian Baxter
Sound Artist







Syrinx (the transfiguration of Donald Trump)


MARCH 2023 - sadly this work doesn't play anymore. It was silenced when Trump was taken off twitter (although the custom handle version worked ok) and recently I've discovered that an essential element - twitter fetcher by Jason Mayes - no longer functions and is not likely to be fixed. Free access to the twitter API is also being cut so without a simple way to grab the latest tweet from a user this piece is dead. There's a lesson in here about making web-based art with multiple dependencies.

What you can hear is Donald Trump's latest tweet transfigured - sonified - into a unique chorus of birdsong.

Very early in my research into sonification I discovered that Twitter had an extensive API through which tweets could be accessed (along with various pieces of metadata). This made it a prime candidate for some kind of sonification scheme (and indeed several other people had this idea before me). However, having cracked the problem of getting data out of the API, I let the project lie dormant for many months whilst I agonised over what sound to link to the tweets themselves. I toyed with ideas such as encoding the 26 letters of the alphabet as a 26-note scale and playing a tweet as a melody, or transforming the tweets into morse code. None of these seemed particularly inspiring. I couldn't find a 'hook'.

Sometime in August 2017 I hit upon the idea of literally turning twitter into birdsong (humbly placing myself in a long tradition of using birdsong in music) and began a project called "A Book of British Birds". The scheme of that piece is to create a huge, virtual dawn chorus using 250 bird species identified on www.british-birdsongs.uk. For that piece a twitter streamer listens for references to birds (including false positives) and plays the corresponding recording from xeno-canto.org - an incredible creative commons resource of bird recordings.

This web-based work is a spin-off from that project. As with many of these works exploring the Web Audio API as a means of delivering sound art, I wanted something portable that could be listened to anywhere, so the methodology is different.

I chose Trump as perhaps the world's most notorious and egregious tweeter and a prime candidate for turning words into something more pleasant.

And despite some skepticism about interactive sound art, I've created another version where any twitter user's tweets can be sonified.

A few acknowledgements

The reading of twitter is facilitated by Jason Mayes' twitter fetcher - an invaluable javascript implementation which reads tweets on the client side. I wish I understood how it works and I'm glad that it does.

The audio element is based on adapting Tero Parviainen's excellent introduction to the Web Audio API for musical (and sound art) applications. Tero was also kind enough to answer my queries when I couldn't get my code to do exactly what I wanted.

All the sounds you can hear are from the xeno-canto archive (602 species identified by www.bird-sounds.net). I salute these field recordists' generosity in sharing their sounds under creative commons. Full credits are here.

To gather sound from another website's content, I depended first on proxying using CORS-anywhere by Rob Wu and latterly https://corsproxy.io/
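For illustration, fetching a remote recording through such a proxy looks roughly like this. The helper name is my own and the exact proxy URL format is an assumption (it may well have changed since the piece was written):

```javascript
// Hedged sketch: wrap a cross-origin audio URL in a CORS proxy so the
// browser will allow us to fetch and decode it (proxy URL format assumed).
function proxied(url) {
  return 'https://corsproxy.io/?url=' + encodeURIComponent(url);
}

// In the browser, the proxied response could then be decoded for playback:
// fetch(proxied(recordingUrl))
//   .then(res => res.arrayBuffer())
//   .then(data => audioCtx.decodeAudioData(data));
```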

Taken together there are quite a lot of dependencies - not least how long Trump will continue to tweet. Or how long twitter will last.

Sheffield, September 2017

More detail on the methodology

Looking at the methodology behind this piece I'm reminded more of cryptography than composition, although I place it in the Cagean tradition of arriving at a schema which translates the output of a chance process I can't control (in this case, tweets) to musical parameters (a selection of sound samples).

Roughly, the method is as follows:

Each tweet is made lower case and stripped of punctuation and other non-alpha characters (hashtags, at signs).

Each tweet is then split into individual words.

Repeated letters are then removed, leaving only the unique letters in that word. For example, letter becomes l e t r.

Each letter refers to a column position (1-26) in a grid of the sound samples (1-602).

Each word is a row in this grid, which is in theory 70 rows long - representing a 140-character tweet made up of 70 single letters separated by spaces.

As the sample numbers wrap at 602, we start at 1 again at 'e' in word (row) 24, meaning the grids are offset ('a' does not always equal 1).

So, for example, if our word letr appears as the first word of a tweet, it would trigger birds 12, 5, 20 and 18 to be included in the chorus.

These samples are spread throughout the stereo field.

And so on for each word in the tweet. An average tweet produces something like 80-100 different bird sounds to be included in the chorus.
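The steps above can be sketched in JavaScript like this. The function and variable names are my own, not from the actual piece, but the logic follows the scheme as described:

```javascript
// Sketch of the enciphering scheme: tweet -> list of bird sample numbers.
function encipher(tweet) {
  const TOTAL_SAMPLES = 602; // species in the grid
  const words = tweet
    .toLowerCase()
    .replace(/[^a-z\s]/g, '') // strip punctuation, hashtags, at signs
    .split(/\s+/)
    .filter(w => w.length > 0);

  const samples = [];
  words.forEach((word, row) => {
    const seen = new Set();
    for (const ch of word) {
      if (seen.has(ch)) continue; // keep only the unique letters
      seen.add(ch);
      const column = ch.charCodeAt(0) - 96;            // a=1 ... z=26
      const index = row * 26 + column;                 // position in the grid
      samples.push(((index - 1) % TOTAL_SAMPLES) + 1); // wrap at 602
    }
  });
  return samples;
}
```

Running this on a tweet beginning "Letter ..." yields 12, 5, 20, 18 for the first word, matching the worked example above, and a 24th word containing 'e' wraps round to sample 1.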

If you open the console you can see the workings as the tweet is enciphered.

After some testing I made a small edit for Android and other mobile devices, limiting the total number of playing tracks to 50, as it seemed Trump's verbosity got the better of those platforms and triggered an error (I think) with the number of concurrent buffers.
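A minimal sketch of that workaround, assuming the chorus arrives as an array of decoded AudioBuffers (the function names and cap logic here are my own, not the piece's actual code):

```javascript
// Hedged sketch: cap the number of concurrent sources on mobile, then
// play each buffer at a random position in the stereo field.
const MAX_MOBILE_TRACKS = 50; // limit described above

function capTracks(buffers, isMobile, max = MAX_MOBILE_TRACKS) {
  return isMobile ? buffers.slice(0, max) : buffers;
}

function playChorus(ctx, buffers, isMobile) {
  for (const buffer of capTracks(buffers, isMobile)) {
    const src = ctx.createBufferSource();
    src.buffer = buffer;
    const pan = ctx.createStereoPanner();  // spread across the stereo field
    pan.pan.value = Math.random() * 2 - 1; // -1 (left) to +1 (right)
    src.connect(pan).connect(ctx.destination);
    src.start();
  }
}
```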