How This Works

Most of this stream is composed by two applications: a radio receiver and a music synthesizer/sampler, both running on Raspberry Pi hardware.  The receiver is a modified open-source WiFi "sniffer" that monitors a channel of 2.4 GHz WiFi.  It detects and tabulates network packets by their associated MAC addresses, then sends messages over OSC (Open Sound Control) to the music synthesizer/sampler to trigger sound production.
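A minimal sketch of that tally-and-send step, with assumed names throughout (the address pattern `/trigger`, the port, and the MAC-to-ID hashing are illustrative, not the project's actual choices). The OSC 1.0 framing is done by hand with the standard library, so no third-party packages are needed:

```python
import socket
import struct
from collections import Counter

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per OSC 1.0."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *ints: int) -> bytes:
    """Build an OSC message whose arguments are all int32."""
    msg = osc_pad(address.encode())                      # address pattern
    msg += osc_pad(("," + "i" * len(ints)).encode())     # type-tag string
    for v in ints:
        msg += struct.pack(">i", v)                      # big-endian int32
    return msg

def send_tallies(packets, host="127.0.0.1", port=57120):
    """packets: iterable of source-MAC strings captured by the sniffer."""
    counts = Counter(packets)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for mac, n in counts.items():
        # One trigger per MAC: fold the MAC into a small stable ID,
        # and pass the packet count as a second argument.
        sock.sendto(osc_message("/trigger", hash(mac) % 128, n), (host, port))
```

Port 57120 is SuperCollider's default language port, which is why it appears here as the example destination.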

The synthesizer/sampler software was developed in SuperCollider to translate the tabulated data into sound.  At the moment, the WiFi activity of my neighborhood triggers sampler and synthesizer sounds, live-coded SuperCollider synthesizers, or MIDI devices.

As triggered patterns play, the user may set note length, rests, key, or mode.  Several instances of the synthesizer software can run on a single Raspberry Pi.  Any networked MIDI or OSC device can also be triggered or synced with this software.
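One way the key/mode setting described above could work is to fold each trigger ID into a scale, so every WiFi event lands on a musically valid note. This is a hypothetical sketch, not the project's actual mapping; the mode tables and the three-octave fold are assumptions:

```python
# Scale degrees as semitone offsets from the root.
MODES = {
    "major":  [0, 2, 4, 5, 7, 9, 11],
    "minor":  [0, 2, 3, 5, 7, 8, 10],
    "dorian": [0, 2, 3, 5, 7, 9, 10],
}

def trigger_to_midi(trigger_id: int, root: int = 60, mode: str = "minor") -> int:
    """Map an arbitrary trigger ID onto a MIDI note within the chosen mode."""
    scale = MODES[mode]
    degree = trigger_id % len(scale)          # which step of the scale
    octave = (trigger_id // len(scale)) % 3   # keep the result within ~3 octaves
    return root + 12 * octave + scale[degree]
```

With `root=60` (middle C) and `mode="minor"`, trigger IDs 0, 1, 2 land on C, D, and E-flat; changing the mode re-voices every pattern at once without touching the triggers.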

Listening:  The background pattern of WiFi-triggered sound, what my neighborhood generates, is best heard at night (US EST).  During commuting hours, 0600–1000 and 1600–1800 EST, a recognizable change in activity can be heard here.  A nearby, busy roadway provides a steady supply of new triggers from the WiFi of the cell phones of passing motorists.  Also in range of this system is a commuter train, providing sudden bursts of potential triggers from the router-yearning cell phones of its passengers.

(03/24) I had my first music performance since the '80s this weekend.  This followed a couple of weeks of intensive work on this silly thing, with many new features added and nagging bugs resolved.  The performance: while the allotted 5 minutes was just enough to get warmed up, it was significant to me to move forward, to get out and show this.  For myself, I look forward to exercising my social skills again.  For this community: the meetings grow larger each weekend.  There is hope to get out more.

(03/03/24) This month was dedicated to getting a performance rig together, with a deadline of playing at a gathering at the end of the month.  The touring system, as it were, is a modular synth with one Raspberry Pi as antenna and sound player, plus a laptop also running an instance of this application.  This was a simplification of other ideas, but it still has a lot of moving parts to assemble before it can be ready to play at any time.  As I rehearse, I have to wonder what anyone would want to hear.  Something with a beat?  That said, I can now control the tempo of all devices on the network, software and hardware, from the application.  It also has the keyboard built in.  And the presets work, on the first OS upgrade in three years.

(02/03/24) Increased Ableton integration (note changes of tempo and tune parts, via the custom scripting); increased web backend development.  A super-major revision and optimization for efficiency and speed (e.g. 60 GUI elements replaced by 3), with older development ideas reactivated, refined, or repaired (e.g. custom presets can be created, saved, and recalled).  A rudimentary looper is in the works.  Hardware for the planned live performance is at the ready.  Stand by for new/old clips to loop on the alternative radio channel.  We have returned to the whimsy of writing song titles from a laptop text file that is automatically uploaded.  That's old-new.

(12/26/23) A stabler station ID, with the time announced on the quarter hour.  A new, complicated script duplicates a little of an Ableton feature: cycling or randomly playing through lists of MIDI instrument patterns and changing the BPM, which is broadcast to the RPi clients to set new note gate lengths.
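The BPM-to-gate-length conversion that the clients are described as doing can be sketched in a few lines. This is an assumed formulation (the `note_fraction` and `gate` parameters are illustrative names, not the project's own):

```python
def gate_seconds(bpm: float, note_fraction: float = 0.25, gate: float = 0.8) -> float:
    """How long a note stays open at a given tempo.

    note_fraction: the note value as a fraction of a whole note
                   (0.25 = quarter note).
    gate:          0..1 ratio of the note value actually sounded;
                   below 1.0 gives a staccato feel.
    """
    whole_note = 4 * 60.0 / bpm                 # four beats per whole note
    return whole_note * note_fraction * gate
```

At 120 BPM a full-length quarter note is half a second, so when the broadcast tempo changes, every client can recompute its gates from the one number.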

The biggest software change is a long-imagined reworking of the GUI: to reduce the number of interface elements and idling variables in the application, one element will serve all, basically.  The slider interface is now a grid that can be tapped to trigger sound at very low latency.  Sounds can also be edited: pitch, gate, and ADSR.

I lied.  Also new, and kind of a big deal: the home radio receiver can send to this remote web server, which can now serve this stream to subscribing clients.  Thus, there is no need to run an RPi antenna to make noise with this thing.  A one-liner OSC request gets you the same triggers that run the home radio demo.  As I get the above GUI refinements cleaned up, I would be happy to share the synthesizer/sampler with anyone interested in contributing or trying it.  Please feel free to contact me.
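A subscribing client might look something like the following sketch. The `/subscribe` address, host, and port are placeholders, not the real server's; only the OSC 1.0 string framing is taken as given:

```python
import socket

def osc_string(s: str) -> bytes:
    """Null-terminate and pad a string to a 4-byte boundary (OSC 1.0)."""
    b = s.encode() + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def parse_address(packet: bytes) -> str:
    """Pull the address pattern off the front of a raw OSC datagram."""
    return packet.split(b"\x00", 1)[0].decode()

def subscribe(host: str, port: int) -> socket.socket:
    """Send a one-shot OSC request; return the socket to read triggers on."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Address plus an empty type-tag string ("," with no argument letters).
    sock.sendto(osc_string("/subscribe") + osc_string(","), (host, port))
    return sock
```

After subscribing, `packet, _ = sock.recvfrom(1024)` followed by `parse_address(packet)` is enough to see which trigger fired and route it to a local synth.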

(10/23/23) The future will be the sound of one Pi clapping.  It's also going to be a balance of modular and Ableton.   It doesn't have to be, but this stuff works in so many different ways at once that it's difficult to resist the temptation to go whole hog.  An upgrade is also afoot, with a new GUI, a forthcoming graphic envelope editor, an LFO, and a fancy pitch dewhickey.

(07/19/23) Ableton integration.  The network now triggers things on the mixer, and VCV Rack triggers things in Ableton and the modular rig.   Modulation scripts remotely change clips on Ableton tracks.  The network can change the tempo and just about anything else, given enough time and interest.

(05/01/23)  New sampler presets, new GUI colors.  New efficiencies leave more room in the display for more stuff.   What? For the radio: better mastering software, a new workstation, and a mixer.

Revisiting development of live video coupled with this sound, last investigated in 2020:

The following video is made only from synth sounds that were sequenced, played, and recorded live as the video was shot.  A busy roadway behind the camera provided the network triggers.  This video was a test of cross-fading the video and sound from one scene to another, edited in the field on a free application.

This video was made in a rural cemetery, a quiet scene with few WiFi triggers in range as a baseline.  As a car comes down the road, a new voice is created and played by the software, usually before the car enters the scene.