# MuBu
MuBu is a Max toolbox for multimodal analysis of sound and motion, sound synthesis, and interactive machine learning. It makes it possible to create interactive gesture-based sonic systems, and it is also the base of the CataRT system for corpus-based concatenative synthesis.
- Available in the Max package manager and from the Ircam Forum
- Video Tutorials @IrcamForum
MuBu covers several typical use cases:
# Recording, Playing, Analyzing and Visualizing Multimodal Data
- audio and audio descriptors
- sensor data and motion descriptors
- MIDI
- temporal markers
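
The common idea behind these items is keeping heterogeneous streams time-aligned in one container, so that audio, descriptors, sensor frames, and markers can be queried together. Here is a minimal Python sketch of that idea; the `Track` and `Container` classes below are hypothetical illustrations of the data model, not MuBu's actual Max objects:

```python
import numpy as np

class Track:
    """One stream of time-tagged frames (audio descriptors, sensor data, ...)."""
    def __init__(self, name, times, frames):
        self.name = name                  # e.g. "accel", "loudness" (illustrative names)
        self.times = np.asarray(times)    # time tag of each frame, in seconds
        self.frames = np.asarray(frames)  # one row per frame

    def at(self, t):
        """Return the frame whose time tag is closest to t."""
        return self.frames[np.argmin(np.abs(self.times - t))]

class Container:
    """Several tracks sharing one timeline, so streams can be queried together."""
    def __init__(self):
        self.tracks = {}

    def add(self, track):
        self.tracks[track.name] = track

    def slice(self, t):
        """Look up every track at the same instant t."""
        return {name: tr.at(t) for name, tr in self.tracks.items()}

# Usage: align 100 Hz sensor frames with 50 Hz audio descriptors.
c = Container()
c.add(Track("accel", np.arange(0, 1, 0.01), np.random.randn(100, 3)))
c.add(Track("loudness", np.arange(0, 1, 0.02), np.random.randn(50, 1)))
print(c.slice(0.5))
```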
# Real-time Processing of Audio and Sensor Data
- filtering
- segmentation
- computing descriptors (pitch, timbre, FFT, MFCC, wavelets, statistics)
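
In MuBu these computations run as real-time modules inside Max. As a language-neutral illustration of what a frame-based descriptor computation does, here is a minimal Python sketch of two simple per-frame descriptors, RMS energy and spectral centroid, assuming a mono signal held in a NumPy array:

```python
import numpy as np

def frame_descriptors(signal, sr, frame_len=1024, hop=512):
    """Per-frame RMS and spectral centroid: two simple examples of the
    descriptor family listed above (pitch, MFCC, etc. work frame by frame too)."""
    window = np.hanning(frame_len)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    rows = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        rms = np.sqrt(np.mean(frame ** 2))
        mag = np.abs(np.fft.rfft(frame))
        centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)
        rows.append((start / sr, rms, centroid))
    return np.array(rows)  # columns: time (s), rms, centroid (Hz)

# Usage: descriptors of a 440 Hz test tone.
sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
print(frame_descriptors(tone, sr)[:3])
```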
# Interactive Machine Learning
- KNN (k-nearest neighbours search)
- PCA (principal component analysis)
- GMM (Gaussian mixture model recognition), GMR (regression)
- HMM (hidden Markov model recognition), XMM (regression)
- DTW (dynamic time warping)
- Gesture Following (GF) and Gesture Variation Following (GVF)
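
MuBu exposes these algorithms as Max objects. To make one of the listed techniques concrete, here is a minimal NumPy sketch of dynamic time warping (DTW), which scores the similarity of two sequences that unfold at different speeds, as happens when the same gesture is performed faster or slower:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: match, insertion, deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Usage: the same curve sampled at two different speeds stays close under DTW.
a = np.sin(np.linspace(0, 3, 60))
b = np.sin(np.linspace(0, 3, 80))
print(dtw_distance(a, b))
```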
# Interactive Sound Synthesis
- Granular synthesis
- Concatenative synthesis
- Additive synthesis
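
As a language-neutral illustration of the granular approach (a sketch of the technique, not MuBu's implementation), here is a minimal Python example that scatters short Hann-windowed grains of a source sound into an output buffer by overlap-add:

```python
import numpy as np

def granulate(source, sr, grain_ms=60.0, grains_per_sec=80, out_dur=2.0, seed=0):
    """Naive granular synthesis: copy random Hann-windowed grains from
    `source` to random positions in the output buffer (overlap-add).
    Assumes `source` is longer than one grain."""
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000.0)
    window = np.hanning(grain_len)
    out = np.zeros(int(sr * out_dur))
    for _ in range(int(grains_per_sec * out_dur)):
        src = rng.integers(0, len(source) - grain_len)
        dst = rng.integers(0, len(out) - grain_len)
        out[dst:dst + grain_len] += window * source[src:src + grain_len]
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out  # normalize to avoid clipping

# Usage: turn a 1 s 220 Hz tone into a 2 s grain cloud.
sr = 44100
tone = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
cloud = granulate(tone, sr)
```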
License: Forum (the toolbox is distributed freely; the code is proprietary)