This is a live stream from an ongoing software development project, begun in 2019, focused on real-time sonification of environmental data.
The “wlan” of wlanMusic reflects that some of the samples and synths playing in the featured audio stream are triggered live by WiFi packets sniffed from the immediate locale. Since WiFi packets contain MAC address information, it is possible to assign a sound or a note value to an address and then to alter note values as other MAC addresses also trigger sounds and notes. Thus chord patterns drawn from selected conventional modes might play in a cycle, expressing each MAC address with the notes of a chord and drawing from it a melodic line within a larger sound. Squelching the busiest MAC addresses from a sorted list becomes important in crowded environments, or when a sparer sound is envisioned. Reassigning sounds and notes to MAC addresses via a drag-and-drop interface also permits some contouring of the overall sound. Importantly, the expression of an assigned sound or note for a MAC trigger may also be multiplied dynamically.
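To make the idea concrete, a minimal sketch of such a mapping in SuperCollider might look like the following. It assumes an external sniffer forwards each observed MAC address to the language as an OSC message on a hypothetical '/wlan/mac' path; the synth, the chosen mode, and the squelch list are likewise placeholders rather than the project's actual interface.

(
s.waitForBoot {
    var macMap = Dictionary.new;            // MAC string -> scale degree
    var mode = Scale.dorian;                // one of the "conventional modes"
    var nextDegree = 0;
    var squelch = Set["aa:bb:cc:dd:ee:ff"]; // busiest addresses, silenced

    SynthDef(\ping, { |freq = 440, amp = 0.1|
        var sig = SinOsc.ar(freq) * EnvGen.kr(Env.perc(0.01, 0.8), doneAction: 2);
        Out.ar(0, Pan2.ar(sig * amp));
    }).add;

    OSCdef(\macIn, { |msg|
        var mac = msg[1].asString;
        if (squelch.includes(mac).not) {
            // first sighting: hand this address the next degree of the mode
            if (macMap[mac].isNil) {
                macMap[mac] = nextDegree;
                nextDegree = nextDegree + 1;
            };
            Synth(\ping, [\freq, mode.degreeToFreq(macMap[mac], 60.midicps, 0)]);
        };
    }, '/wlan/mac');
};
)

Reassignment and dynamic multiplication would then amount to editing the dictionary entries and spawning more than one synth per trigger.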
Sample processing includes several means of slicing samples and several levels of granular processing. Any sound file or list of sound files may be added to the application via drag and drop, but curated sets of sounds are also available via menu. Musical notes may be expressed simultaneously with the samples by a growing list of synths native to the software, or by software and hardware linked via OSC or MIDI. Some effects built into the software, such as dynamic, tempo-synced delay and reverb, may be applied to any sound. A provision is also made to save a playing configuration as a preset in a song set, for easy recall during performance.
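By way of illustration only, one granular sample voice with a tempo-synced delay could be sketched along these lines; the file path, grain settings, and parameter names are placeholders, not the project's actual processing chain, and a mono source file is assumed.

(
s.waitForBoot {
    {
        var buf = Buffer.read(s, "/path/to/sample.wav");   // any dropped-in (mono) sound file

        SynthDef(\grainSlice, { |bufnum, pos = 0.25, amp = 0.2, beatDur = 0.5|
            var grains, delayed;
            // granular read from a drifting position within the chosen slice
            grains = GrainBuf.ar(2, Impulse.kr(20), 0.1, bufnum, 1,
                pos + LFNoise1.kr(2).range(0, 0.02));
            // delay time locked to one beat of the current tempo
            delayed = CombC.ar(grains, 2, beatDur, 3);
            Out.ar(0, (grains + (delayed * 0.4)) * amp);
        }).add;

        s.sync;   // wait for the buffer and def before playing

        Synth(\grainSlice, [
            \bufnum, buf,
            \beatDur, TempoClock.default.beatDur
        ]);
    }.fork;
};
)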
As the expression of the WiFi patterns of a locale is the definition of this project, portability and ease of use have been important considerations in continued development. Thus a wlanMusic unit might run on a Raspberry Pi powered from a portable USB battery pack, connecting to other hardware via Ethernet and USB. In a common configuration two instances of the software run synchronized on one device, but it is also possible to link remote devices into complex configurations, such as a local device playing a remote locale's sound information along with that of the present locale. One device could thus play several locales (e.g. LA & Tokyo played in Salem, MA).
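Linking units might, in the simplest case, amount to relaying trigger messages over OSC. The sketch below assumes the same hypothetical '/wlan/mac' message as above and an arbitrary address and port for the far-end unit; the real synchronization scheme is not detailed here.

(
var remote = NetAddr("192.168.1.50", 57120);   // the far-end wlanMusic unit

// re-broadcast every locally sniffed MAC trigger to the remote player
OSCdef(\relayMac, { |msg|
    remote.sendMsg('/wlan/remoteMac', msg[1]);
}, '/wlan/mac');

// and receive the remote locale's triggers for local playback
OSCdef(\remoteIn, { |msg|
    ("remote MAC heard: " ++ msg[1]).postln;   // hand off to the local player here
}, '/wlan/remoteMac');
)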
Locale differences and effects are remarkable from vehicles moving through varying population and traffic densities, or at various times of the day, such as the contrast between the morning commute and late at night. Remote, rural graveyards might contain only the transient WiFi activity generated by the cell phones of passing drivers and passengers. A nearby highway might consistently provide a dozen new “voices” every second over the background WiFi traffic of an otherwise quiet neighborhood, providing a strong, predictable dynamic over 24 hours.
If a change of tempo can be discerned in the present playing configuration, it comes from the automatic adjustments a complementary sequencer makes to the overall tempo of configured auxiliary instruments, based on an analysis algorithm embedded in the wlanMusic software. Other environmental data, such as weather information, has also been configured to influence tempo and perhaps other parameters of sound production.
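The embedded analysis algorithm itself is not described here, but one way a density-to-tempo mapping could look is to count triggers over a fixed window and ease the clock toward a value derived from that count; the window length and bpm range below are arbitrary.

(
var count = 0;

// tally every trigger as it arrives
OSCdef(\densityCount, { count = count + 1 }, '/wlan/mac');

Routine {
    loop {
        var perSecond, target;
        10.wait;                                  // ten-second analysis window
        perSecond = count / 10;
        count = 0;
        // map 0..20 triggers per second onto roughly 60..140 bpm,
        // then ease the default clock toward that value
        target = perSecond.linlin(0, 20, 60, 140) / 60;
        TempoClock.default.tempo =
            (TempoClock.default.tempo * 0.8) + (target * 0.2);
    }
}.play(SystemClock);
)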
The software is written in SuperCollider. Mixdown is in Ableton Live before upload to this Icecast server for broadcast.
WiFi Player (2024/10)
Sequencer (2024/10)
News: (8/2)
Recent development includes work on an experimental, programmatic means of creating titled audio tracks on the fly. Once a desired broadcast stream sound is achieved and an inspired title for the track is imagined and saved to a text file, a remote recording of ten minutes of the audio stream is activated. The audio file is saved automatically to an index of recordings, which are reloaded into the ongoing loop of recordings playing in the More section of the front page of this site. Eventually the titled recordings will be listed for individual listening, either here or on another streaming platform.
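In SuperCollider terms, the core of that mechanism might be sketched as follows; the file locations, header format, and title handling are placeholders, not the project's actual code.

(
var title, path;
// read the saved title and name the recording after it
title = File.readAllString("~/wlanMusic/next_title.txt".standardizePath).replace("\n", "");
path = "~/wlanMusic/recordings/".standardizePath ++ title ++ ".wav";

s.recHeaderFormat = "wav";
s.record(path);                                      // start recording the stream mix
SystemClock.sched(600, { s.stopRecording; nil });    // stop after ten minutes
)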
Aesthetics: This live stream is usually unmanned 23 of 24 hours. A set-up is usually stabilized daily and left to degrade or develop until a system crash (silence) or the next human intervention, with an aim just to see what happens in the environment and as a general test of the features and reliability of the software. Needless to say, this is not trying to be anything, so it can’t fail. Indeed, one could say this exercise is the result of a life-long interest in chance, noise, heuristic listening, cognitive science, creativity, problem solving, perception, consciousness, subjective experience, science, theory, systems, surrealism, Pataphysics, humanism, respect for dignity, humor, provocation, density and silence.