This stream is a demonstration of music composed by two applications: a radio signal receiver and a music synthesizer/sampler. Each runs on Raspberry Pi (RPi) devices running real-time builds of Linux, networked via Ethernet over a dedicated switch for lag-free, live play.
The receiver is a modified open-source application that places a wireless interface in promiscuous ("sniffer") mode to monitor selected channels of the 2.4 GHz WiFi band. It detects and tabulates TCP/IP packets by their associated MAC addresses, then sends messages over the OSC (Open Sound Control) protocol to the music synthesizer to trigger sound production.
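The receiver-to-synthesizer hand-off above rests on OSC's simple binary framing. The sketch below builds an OSC 1.0 message by hand with only the Python standard library; the address "/wifi/trigger", the argument layout, and the target IP are illustrative assumptions, not the application's actual scheme.

```python
# Minimal sketch of hand-building an OSC 1.0 message, as the receiver
# might send to the synthesizer. The address "/wifi/trigger" and the
# argument layout are hypothetical examples, not the app's real scheme.
import socket
import struct

def _osc_string(s: str) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode an address pattern, a type-tag string, and the arguments."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, float):
            tags += "f"; payload += struct.pack(">f", a)   # big-endian float32
        elif isinstance(a, int):
            tags += "i"; payload += struct.pack(">i", a)   # big-endian int32
        elif isinstance(a, str):
            tags += "s"; payload += _osc_string(a)
    return _osc_string(address) + _osc_string(tags) + payload

# e.g., report that one MAC address was seen 3 times:
msg = osc_message("/wifi/trigger", "aa:bb:cc:dd:ee:ff", 3)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(msg, ("192.168.0.20", 57120))  # 57120 is sclang's default port
```

Every component of an OSC packet is padded to a multiple of 4 bytes, which is why the helper pads both the address and the type-tag string.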
The synthesizer software was developed with SuperCollider to translate the tabulated data into sound. In this application of the software, WiFi activity may trigger sampler and synthesizer sounds, live-coded SuperCollider synths, or sounds from external devices. Digital audio streams are transmitted over the network via the Linux JACK audio framework. Generated MIDI and OSC information also drives outboard gear whose analog audio is converted through digital audio interfaces (e.g., a DAC) and mixed with the digital streams.
This software creates and maintains accounts for MAC addresses and exposes the parameters of music play through a GUI. A user may combine the trigger patterns of several MAC addresses on one instrument with a drag-and-drop screen action to create a novel sound pattern. Sounds may be triggered from digital audio files arranged in preset banks, from files drag-dropped onto the application, or from outboard synthesizer presets, or they may be live-coded as SuperCollider synths, alone or simultaneously with any combination of the other modalities. With a GUI reminiscent of conventional mixing hardware, solo, mute, and volume sliders on individual instruments allow a user to alter the expression of sounds generated by MAC address triggers.
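The per-MAC accounts and the drag-and-drop combination of trigger patterns can be pictured as a small routing data structure. The sketch below is a minimal illustration under assumed names (Instrument, Router, and their fields are hypothetical; the application's internals are not documented here).

```python
# Sketch of per-MAC "accounts" (packet tallies) and instrument routing,
# where several MACs can be combined onto one instrument. All class and
# field names are hypothetical illustrations.
from collections import defaultdict

class Instrument:
    """One instrument channel with mixer-style controls."""
    def __init__(self, name):
        self.name = name
        self.macs = set()      # MAC addresses assigned to this instrument
        self.muted = False
        self.volume = 1.0

    def assign(self, mac):
        """The drag-and-drop action: add a MAC's trigger pattern."""
        self.macs.add(mac)

class Router:
    """Tallies packets per MAC and fires any instrument holding that MAC."""
    def __init__(self):
        self.counts = defaultdict(int)
        self.instruments = []

    def on_packet(self, mac):
        self.counts[mac] += 1
        return [i.name for i in self.instruments
                if mac in i.macs and not i.muted]

router = Router()
drums = Instrument("drums")
drums.assign("aa:bb:cc:dd:ee:01")
drums.assign("aa:bb:cc:dd:ee:02")   # two MACs share one instrument
router.instruments.append(drums)
print(router.on_packet("aa:bb:cc:dd:ee:01"))  # ['drums']
```

Muting an instrument simply removes it from the trigger path without discarding its MAC assignments, mirroring the mixer-style GUI controls.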
A user may modify the expression of each instrument channel with pre-note rests or specify its relative note duration. Synthesizer sounds and sampler playback may be edited with a knob interface to set root, mode, and note values. A user may also select a chord progression from a menu, or live-code a custom sequence, to cycle through all of the instruments at desired intervals. Any configuration of on-screen parameters may be saved and recalled from banks of presets organized as songs. A separate automation program cycles through song parts at set durations while synchronizing any number of instances of the synthesizer software. Through this scalable interface, a user may control any number of RPi, local or remote, from a single workstation.
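The save/recall of on-screen parameters as song presets, and the automation program's cycling through song parts, can be sketched as follows. The part names, parameter keys, and SongBank class are hypothetical; the real software's preset format is not documented here.

```python
# Sketch of preset banks organized as songs, with snapshots of
# on-screen parameters saved, recalled, and cycled through in order.
# Names and parameter keys are hypothetical illustrations.
import copy
import itertools

class SongBank:
    def __init__(self):
        self.parts = {}                       # part name -> parameter snapshot

    def save(self, part, params):
        self.parts[part] = copy.deepcopy(params)   # snapshot, not a reference

    def recall(self, part):
        return copy.deepcopy(self.parts[part])

bank = SongBank()
bank.save("verse",  {"root": "C", "mode": "minor", "duration": 0.5})
bank.save("chorus", {"root": "F", "mode": "major", "duration": 1.0})

# Cycle through song parts in order; the automation program would also
# wait a set duration per part and synchronize remote instances.
order = itertools.cycle(["verse", "chorus"])
for _ in range(3):
    part = next(order)
    params = bank.recall(part)
```

Deep-copying on save and recall keeps a recalled snapshot from being mutated by later on-screen edits, which is the behavior a preset bank needs.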
In this radio demonstration, 4 RPi are usually served by a single, separate receiver. This means a single MAC packet captured by the receiver could trigger 32 sounds, as 4 devices x 4 software instances x 2 (synth & sample) = 32. Through Linux JACK MIDI, a synthesizer instance may also trigger any number of outboard MIDI devices, such as a modular synthesizer or sampler. Networked OSC-enabled devices may also receive and process performance data, including over extended, wide-area networks (WAN). Thus a "melody" could be created from a pattern of WiFi triggers produced remotely and transmitted to instances of the synthesizer software, to play alongside a pattern generated locally.
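The fan-out arithmetic above can be written as a one-line calculation, useful for checking other rig sizes (the function name is ours, not the software's):

```python
# Fan-out of one captured MAC packet across the rig, using the counts
# given in the demonstration: 4 devices x 4 instances x 2 voices.
def fan_out(devices: int, instances: int, voices_per_instance: int) -> int:
    """Total sounds a single packet can trigger."""
    return devices * instances * voices_per_instance

print(fan_out(4, 4, 2))  # 32
```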
For the purpose of this live internet radio stream, an antenna/receiver device triggers 4 or more RPi synthesizer devices. Each synthesizer device runs up to 4 instances of the synthesizer software, and each instance might trigger an additional outboard MIDI device. JACK audio networking links the streams produced by these RPi to a sixth RPi that mixes them to a stereo stream. This stream is sent through a DAC and mixed with the analog signals of the outboard MIDI devices on a traditional hardware audio mixer, then converted back to a digital stream for transmission to an Icecast or Shoutcast server for streaming broadcast.
All RPi devices run almost entirely from RAM at less than 50% CPU capacity, requiring no cooling beyond a heat sink and free air flow.
As this project remains a work in progress, the current broadcast primarily reflects a daily review and test of new features and configurations of the software, accompaniment, and mixing. Thus, the actual running result is often a pad: everything at once. Changes to this configuration are made almost daily, usually after 5 PM or 12 AM EST.
(01/01/23) Current work includes integrating audio hardware with software development, managing new presets, and improving ease of use. A new live-coding interface allows for greater flexibility and collaboration in performance.