This stream is a demonstration of music composed by two applications: a radio signal receiver and a music synthesizer/sampler. Each runs on Raspberry Pi (RPi) devices running real-time builds of Linux, networked via Ethernet over a dedicated switch for lag-free, live play.
The receiver is a modified open-source application that puts a wireless interface into promiscuous mode (a "sniffer") to monitor selected channels of 2.4 GHz WiFi signals. It detects and tabulates TCP/IP packets by their associated MAC addresses, then sends messages via the OSC (Open Sound Control) protocol to the music synthesizer/sampler to trigger sound production.
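The tally-and-trigger step can be sketched in Python. This is a minimal illustration under stated assumptions, not the project's actual code: the 802.11 capture is stubbed out (the real receiver uses a promiscuous wireless interface), the OSC encoder handles only the simple argument types this sketch needs, and the `/trigger` address and message layout are hypothetical.

```python
# Minimal sketch of the receiver's tally-and-send step. Hypothetical:
# the real capture uses a promiscuous 802.11 interface; here, source
# MAC addresses arrive as strings from some stubbed capture loop.
import socket
import struct
from collections import Counter

def _osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a simple OSC message (int, float, and string args only)."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)
        else:
            tags += "s"
            payload += _osc_pad(str(a).encode())
    return _osc_pad(address.encode()) + _osc_pad(tags.encode()) + payload

tally = Counter()  # packets seen, per MAC address

def on_packet(src_mac: str) -> bytes:
    """Tabulate one sighting and build its OSC trigger message."""
    tally[src_mac] += 1
    return osc_message("/trigger", src_mac, tally[src_mac])

# Sending is one UDP datagram per message, e.g. to the synth host
# (57120 is SuperCollider's default language port):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(on_packet("aa:bb:cc:dd:ee:ff"), ("192.168.1.20", 57120))
```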
The synthesizer/sampler software was built on SuperCollider to translate tabulated data into sound. In this application, WiFi activity triggers sampler and synthesizer sounds, live-coded SuperCollider synths, or MIDI devices. Digital audio streams are transmitted over the network via the Linux JACK audio framework.
This software creates and maintains accounts for MAC addresses and edits the parameters of music play. At its core, it maps each MAC address (the unique network identifier of a wireless device) to the same instrument, with minimal latency, every time the device sends a signal. Sounds may be generated from digital audio files arranged in preset banks, from files drag-and-dropped onto the application, from outboard MIDI synthesizers, or live-coded as SuperCollider synths, alone or simultaneously in any combination of these modalities. MAC-to-instrument mappings can be translated over networks by OSC for remote play with alternative instruments. A mixer interface with solo/mute buttons and a volume slider for each instrument allows dynamic expression of any sound. MAC signal patterns may be combined by drag-and-drop onto an instrument to create new rhythms.
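The sticky MAC-to-instrument mapping described above can be sketched as follows; the class name, bank contents, and round-robin slot assignment are illustrative assumptions, not the project's actual scheme:

```python
# Hypothetical sketch of a sticky MAC-to-instrument map: the first time
# a MAC address is seen, it claims the next slot in a fixed instrument
# bank (wrapping around); every later sighting reuses the same slot, so
# a given device always plays the same instrument.
class InstrumentMap:
    def __init__(self, bank):
        self.bank = list(bank)   # e.g. sample or synth names
        self._slots = {}         # MAC address -> index into bank

    def instrument_for(self, mac: str) -> str:
        if mac not in self._slots:
            self._slots[mac] = len(self._slots) % len(self.bank)
        return self.bank[self._slots[mac]]
```

Looking up the same MAC twice returns the same instrument; a dictionary lookup keeps the per-trigger cost near-constant, which matters for the low-latency requirement.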
A user may also modify the expression of each instrument channel with pre-note rests and set the duration of play for any instrument. Root, mode, and note values of the synthesizers are set with a knob interface; a chord progression may be selected from a menu, or a custom sequence may be live-coded to cycle through chords. A user may save and recall any configuration of all on-screen parameters into banks. Auxiliary automation software enables synchronized cycling of song parts among any number of instances of the synthesizer software. The chords of any number of software instances may also be set from a MIDI keyboard. From a single workstation, a user may create music from many RPi devices, local or remote, triggered simultaneously from a variety of network environments.
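The save-and-recall of on-screen parameters might look like this JSON-backed sketch; the class, method names, and parameter structure are assumptions for illustration, not the project's actual API:

```python
# Hypothetical sketch of preset banks: snapshot every on-screen
# parameter into a named slot and recall it later. JSON round-tripping
# gives a cheap deep copy and a natural on-disk format.
import json

class PresetBank:
    def __init__(self):
        self._slots = {}  # slot name -> parameter snapshot

    def save(self, name: str, params: dict) -> None:
        # Copy via JSON so later GUI edits don't mutate the snapshot.
        self._slots[name] = json.loads(json.dumps(params))

    def recall(self, name: str) -> dict:
        return json.loads(json.dumps(self._slots[name]))

    def dump(self) -> str:
        """Serialize all slots, e.g. for writing to disk."""
        return json.dumps(self._slots)
```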
For the purpose of this live internet radio stream, an antenna/receiver device triggers four or more RPi synthesizer devices. Each synthesizer device runs up to four instances of the synthesizer software, and each may trigger an additional outboard MIDI device. JACK audio networking carries the streams produced by the RPis to a sixth RPi, which mixes them down to stereo. That stereo stream is sent to a DAC, mixed with the analog signals of the outboard MIDI devices, then converted back to a digital stream and transmitted to an Icecast or Shoutcast server for streaming broadcast.
All RPi devices run almost entirely from RAM at less than 50% CPU capacity, requiring no more cooling than a heat sink and free air flow.
Listening: The background pattern of WiFi-triggered sound, generated by the neighborhood, is best heard at night (US EST). During commuting hours (0600-1000 and 1600-1800 EST), new transient WiFi addresses trigger a recognizable change in the overall sound, breaking up the background pattern. A nearby, busy roadway provides regular new triggers from the WiFi of passing motorists' cell phones. A commuter train also passes periodically within range of the system, and the dozens of cell phones onboard suddenly supply a flood of new triggers, creating a notable storm of activity.
(10/23/23) The future will be the sound of one Pi clapping. It's also going to be a balance of modular and Ableton. It doesn't have to be, but this stuff works in so many different ways at once that it's difficult to resist the temptation to go whole hog. An upgrade is also afoot with a new GUI, a forthcoming graphic envelope editor, an LFO, and a fancy pitch dewhickey.
(07/19/23) Ableton integration. The network now triggers stuff on the mixer, and VCV Rack triggers stuff in Ableton and the modular rig. Modulation scripts remotely change clips on Ableton tracks. The network can change the tempo and just about anything else, given enough time and interest.
(06/01/23) Some new modular modules and considered patching. With each iteration, new insights are gained.
(05/01/23) New sampler presets, new GUI colors. New efficiencies make more room in the display for more stuff. What? For the radio: better mastering software, a new workstation, and a mixer.
(04/19/23) Software revised for efficiency and improved MIDI integration. Digital mixer added. Reorganized the modular system with new modules. The software triggers modules via MIDI now, setting 1V tones and turning stuff on and off.
(03/25/23) Hardware update pending with major software revision.
(03/06/23) Testing a new/old 8-channel interface, which might allow more options in mixing outboard audio gear.
(02/24/23) A portable live rig is in the works (one RPi does it all), triggering outboard synths, including a portable, modular processing rack.
Revisiting development of live video coupled with this sound, last investigated in 2020:
The following video was made only from synth sounds that were sequenced, played, and recorded live as the video was shot. A busy roadway behind the camera provided the network triggers. This video was a test of cross-fading the video and sound from one scene to another, edited in the field on a free application.
This video was made at a rural cemetery, a quiet scene with few WiFi triggers in range as a baseline. As a car comes down the road, a new voice is created and played by the software, usually before the car enters the scene.