AGENT MADNESS
PUBLIC ENTRY PAGE

BrainJam

BrainJam is an embodied AI agent that acts as a musical co-performer, using your real-time brain activity and muscle tension to jam with you live.

Builder: Eyyüb GÜVEN
Build Type: Creative Project
Lifecycle: Experimental build
Consensus Score: 87.3
Region: Region 2
Categories: Research · Audio / Voice · Video / Media
Go Deeper
Most AI music generators are passive tools driven by text prompts, but BrainJam is built to be a live artistic partner. Instead of typing, the human performer uses their actual physiological and cognitive states as expressive control channels. The system fuses three biological inputs in real time:

1. EEG (P300): reads the performer's visual attention to make discrete musical selections.
2. EMG (muscle tension): captures embodied movement to control expressive dynamics such as volume and filter sweeps.
3. fNIRS (cortical blood flow): tracks slow-changing cognitive engagement to modulate the AI's harmonic tension and complexity over time.

What makes this a true "agent" is the bidirectional feedback loop, sketched below: the AI (powered by PyTorch/MusicGen) doesn't just generate music autonomously; it continuously adapts its musical proposals in response to the performer's biological feedback. Built with Python and Streamlit, it is currently an open-source prototype for my cognitive science PhD research application, exploring the bleeding edge of human-AI co-agency. We are shifting AI from a simple "tool" into an embodied, responsive co-performer.
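To make that loop concrete, here is a minimal runnable sketch of the control cycle described above. Every name in it (StubSignals, StubAgent, decode_p300_choice, emg_envelope, fnirs_engagement, and the agent methods) is a hypothetical stand-in, not BrainJam's actual API; the stubs emit random values so the skeleton runs on its own.

```python
"""Hypothetical sketch of a bidirectional human-AI jam loop: the agent
proposes, the performer's biosignals steer, the agent adapts."""
import random
import time

class StubSignals:
    """Stands in for the EEG/EMG/fNIRS decoding pipelines."""
    def decode_p300_choice(self):   # EEG: index of the attended option, or None
        return random.choice([None, None, 0, 1, 2])
    def emg_envelope(self):         # EMG: normalized muscle tension, 0..1
        return random.random()
    def fnirs_engagement(self):     # fNIRS: slow engagement estimate, 0..1
        return random.random()

class StubAgent:
    """Stands in for the PyTorch/MusicGen generator."""
    def propose_phrase(self): return "phrase-0"
    def accept(self, choice): return f"phrase-{choice}"
    def set_dynamics(self, volume, filter_cutoff): pass
    def set_harmonic_complexity(self, level): pass
    def adapt(self, phrase): return phrase

agent, signals = StubAgent(), StubSignals()
phrase = agent.propose_phrase()                   # the AI opens with a proposal
for _ in range(100):                              # ~5 s of a 20 Hz control loop
    choice = signals.decode_p300_choice()         # discrete pick via visual attention
    if choice is not None:
        phrase = agent.accept(choice)
    tension = signals.emg_envelope()
    agent.set_dynamics(volume=tension, filter_cutoff=tension)   # embodied dynamics
    agent.set_harmonic_complexity(signals.fnirs_engagement())   # slow harmonic drift
    phrase = agent.adapt(phrase)                  # the agent revises its proposal
    time.sleep(0.05)
```

The point of the structure is that the human never issues commands: discrete choices, continuous dynamics, and slow harmonic drift all arrive as decoded biosignals, and the agent's `adapt` step closes the loop.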
Stack Used
AI & Core Logic: Google Gemini API (PromptDJ) for semantic musical mapping, PyTorch for real-time sequence generation, and scikit-learn (LDA) for EEG P300 classification. Biosignal Processing: MNE-Python & SciPy (Signal processing), BrainFlow (Hardware abstraction for EEG/EMG), and Custom Modified Beer-Lambert implementations for fNIRS oxygenation tracking. Frontend/UX: Streamlit (Web Interface), Plotly (Real-time data visualization), and Custom SVG/CSS for the interactive instrument panels. Backend & Integration: Python 3.10+, NumPy, Pandas, and LSL (Lab Streaming Layer) for high-precision, low-latency cross-device synchronization. Generative Audio: MusicGen (Transformers) and MIDI-based symbolic synthesis for zero-latency feedback loops.