Oculizer 
Over the past couple of years, I've been developing a DMX lighting automation system that creates real-time, music-reactive lighting. Oculizer uses machine learning to automatically predict and switch between lighting scenes based on live audio analysis. The system combines EfficientAT, a state-of-the-art audio tagging neural network, with spectral audio features to capture both the semantic content and the acoustic properties of the music. The embeddings let Oculizer match musical moments to appropriate lighting scenes, while a mel-scaled FFT analyzes the signal's frequency components and maps them to DMX values through configurable scenes.
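To make that last part concrete, here is a minimal sketch of the mel-FFT-to-DMX idea: reduce a block of live audio to mel-scaled band energies, then map a configured band range to a 0-255 DMX channel value. This is not Oculizer's actual code; the scene layout, channel numbers, and `band_to_dmx` helper are hypothetical, and librosa stands in for whatever FFT front end the project actually uses.

```python
import numpy as np
import librosa

SAMPLE_RATE = 44100

def mel_bands(samples: np.ndarray, n_mels: int = 24) -> np.ndarray:
    """Mel-scaled energy per band for one block of mono audio."""
    spec = librosa.feature.melspectrogram(
        y=samples, sr=SAMPLE_RATE, n_fft=2048,
        hop_length=len(samples), n_mels=n_mels,
    )
    return spec.mean(axis=1)  # average energy across frames in this block

def band_to_dmx(bands: np.ndarray, lo: int, hi: int, gain: float = 1.0) -> int:
    """Map the mean energy of mel bands [lo, hi) to a DMX value in 0..255."""
    energy = float(bands[lo:hi].mean())
    level = np.log1p(gain * energy)  # compress dynamic range
    return int(np.clip(255 * level, 0, 255))

# Hypothetical scene: drive a dimmer from bass energy, a strobe from treble.
samples = np.random.randn(4096).astype(np.float32)  # stand-in for a live audio block
bands = mel_bands(samples)
dmx_frame = {1: band_to_dmx(bands, 0, 4), 5: band_to_dmx(bands, 18, 24)}
print(dmx_frame)  # e.g. {1: 142, 5: 37}
```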
My hope is that this project is the precursor to developing systems that facilitate fruitful social interactions. Our brains are highly sensitive to the spectra of ambient light, sound, and odor in the environment, but the ways these factors shape our feelings and behavior often evade conscious awareness. By systematically testing the relationships between these factors and human behavior, we can build the knowledge needed to optimize the spaces we inhabit for connection, cooperation, and a really fun time.
The project is open source and available on GitHub.
Core Features
- Intelligent scene prediction using EfficientAT neural network embeddings (see the sketch after this list)
- Real-time audio reactivity using mel-scaled FFT analysis
- Dual-stream and single-stream audio modes for flexible setups
- Support for RGB lights, dimmers, strobes, and lasers
- Automatic scene switching based on audio content analysis
- Manual scene control with interactive grid-based browser
- Live scene switching and MIDI control support
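To give a feel for the scene prediction feature, here is a hedged sketch of one plausible approach: compare a live clip embedding against stored per-scene reference embeddings and switch only when a new scene wins by a clear margin, so the lights don't flicker on borderline audio. The scene names, embedding size, and margin logic are all assumptions for illustration, not Oculizer's actual implementation; the stand-in vectors below would be EfficientAT embeddings in the real system.

```python
import numpy as np

SCENE_EMBEDDINGS = {  # hypothetical reference embedding per scene
    "chill": np.random.randn(768),
    "dance": np.random.randn(768),
    "strobe_drop": np.random.randn(768),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_scene(embedding: np.ndarray, current: str, margin: float = 0.05) -> str:
    """Pick the most similar scene, switching away from `current`
    only when another scene is better by at least `margin`."""
    scores = {name: cosine(embedding, ref) for name, ref in SCENE_EMBEDDINGS.items()}
    best = max(scores, key=scores.get)
    if best != current and scores[best] < scores.get(current, -1.0) + margin:
        return current  # not confidently better: stay put
    return best

live_embedding = np.random.randn(768)  # stand-in for an EfficientAT clip embedding
print(predict_scene(live_embedding, current="chill"))
```

The margin check is one simple way to add hysteresis to automatic scene switching; any real system would tune or replace it based on how often the scenes should change.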