# MuBu
MuBu (multi-buffer) is a Max toolbox for multimodal analysis of sound and motion, sound synthesis, and interactive machine learning. It makes it possible to create interactive gesture-based sonic systems, and it is also the foundation of the CataRT system for corpus-based concatenative synthesis.
# Download and Tutorials
- MuBu is available in the Max Package Manager and from the Ircam Forum
- MuBu-related video tutorials
- Video tutorial patches @IrcamForum
MuBu covers several typical use cases:
# Recording, Playing, Analyzing and Visualizing Multimodal Data
- audio and audio descriptors
- sensor data and motion descriptors
- MIDI
- temporal markers
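The core idea behind the multi-buffer is that several tracks of different types and rates (audio, descriptors, MIDI, markers) share one time axis. A minimal sketch of that concept in plain Python (the class and method names here are hypothetical illustrations, not MuBu's actual Max objects):

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    """One track of time-tagged frames (audio samples, descriptor
    vectors, MIDI events, markers...), each with its own rate."""
    rate: float                 # frames per second
    frames: list = field(default_factory=list)

class MultiBuffer:
    """Toy model of the multi-buffer idea: tracks of different
    rates and types sharing one time axis (an illustration of the
    concept only, not MuBu's actual API)."""
    def __init__(self):
        self.tracks = {}

    def add_track(self, name, rate, frames):
        self.tracks[name] = Track(rate, list(frames))

    def frame_at(self, name, time_s):
        """Frame of a track nearest to a position in seconds."""
        track = self.tracks[name]
        index = min(len(track.frames) - 1, round(time_s * track.rate))
        return track.frames[index]

mb = MultiBuffer()
mb.add_track("audio", 4.0, [0.0, 0.5, -0.5, 0.25])   # toy audio, 4 frames/s
mb.add_track("markers", 1.0, ["start", "attack"])    # markers, 1 frame/s
print(mb.frame_at("audio", 0.5), mb.frame_at("markers", 0.9))
```

Because every track stores its own rate, any track can be queried at any time position, which is what keeps audio, descriptors, and markers aligned.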
# Real-time Processing of Audio and Sensor Data
- filtering
- segmentation
- computing descriptors (pitch, timbre, FFT, MFCC, wavelets, statistics)
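To make "computing descriptors" concrete, here is a minimal pure-Python sketch that frames a signal and computes two simple descriptors per frame, RMS energy and zero-crossing rate. This only illustrates the general idea of frame-based descriptor computation; it is not MuBu's processing pipeline, and the function names are hypothetical:

```python
import math

def frame_descriptors(signal, frame_size=512, hop=256):
    """Per-frame RMS energy and zero-crossing rate, two basic
    audio descriptors (illustrative only, not MuBu's API)."""
    descriptors = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size]
        # RMS: root of the mean squared amplitude of the frame
        rms = math.sqrt(sum(x * x for x in frame) / frame_size)
        # ZCR: fraction of consecutive sample pairs that change sign
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (frame_size - 1)
        descriptors.append((rms, zcr))
    return descriptors

# 440 Hz sine at 44.1 kHz: near-constant RMS, regular zero crossings
sr = 44100
sine = [math.sin(2 * math.pi * 440 * n / sr) for n in range(4096)]
for rms, zcr in frame_descriptors(sine)[:2]:
    print(f"rms={rms:.3f} zcr={zcr:.4f}")
```

Real descriptor pipelines (MFCC, pitch, timbre) follow the same frame/hop structure, just with more elaborate per-frame computations.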
# Interactive Machine Learning
- KNN (k-nearest neighbours search)
- PCA (principal component analysis)
- GMM (Gaussian mixture model recognition), GMR (regression)
- HMM (hidden Markov model recognition), XMM (regression)
- DTW (dynamic time warping)
- Gesture Following (GF) and Gesture Variation Following (GVF)
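Of the techniques above, DTW is the easiest to show in a few lines. The classic dynamic program below compares two gesture traces of different lengths; this is a textbook sketch of the algorithm, not MuBu's implementation:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences:
    the minimum summed point-wise cost over all monotonic
    alignments (textbook O(len(a)*len(b)) dynamic program)."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]

# A time-stretched copy of a gesture aligns perfectly under DTW
gesture = [0.0, 0.2, 0.9, 0.3, 0.0]
stretched = [0.0, 0.0, 0.2, 0.2, 0.9, 0.9, 0.3, 0.3, 0.0, 0.0]
print(dtw_distance(gesture, stretched))  # → 0.0
```

This invariance to local time stretching is exactly why DTW (and its real-time relatives such as gesture following) suit gesture comparison, where the same movement is rarely performed at the same speed twice.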
# Interactive Sound Synthesis
- Granular synthesis
- Concatenative synthesis
- Additive synthesis
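As a rough illustration of the first technique, granular synthesis overlap-adds many short, windowed slices ("grains") of a source buffer at new positions in time. The sketch below shows the principle in pure Python on a synthetic source; the function and its parameters are hypothetical and do not reflect MuBu's granular engine:

```python
import math
import random

def granular(source, n_grains=200, grain_len=441, out_len=44100, seed=0):
    """Naive granular synthesis: overlap-add Hann-windowed grains
    taken from random source positions at random output positions
    (a sketch of the idea, not MuBu's granular player)."""
    rng = random.Random(seed)
    # Hann window avoids clicks at grain boundaries
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (grain_len - 1))
              for n in range(grain_len)]
    out = [0.0] * out_len
    for _ in range(n_grains):
        src = rng.randrange(0, len(source) - grain_len)
        dst = rng.randrange(0, out_len - grain_len)
        for n in range(grain_len):
            out[dst + n] += source[src + n] * window[n]
    return out

sr = 44100
source = [math.sin(2 * math.pi * 220 * n / sr) for n in range(sr)]
cloud = granular(source)  # one second of grain cloud from the source
```

Concatenative synthesis (as in CataRT) follows a related idea, but selects segments by their descriptor values instead of at random.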
License: Forum (toolbox distributed freely, proprietary code)