# Audio Analysis
Audio analysis automatically detects the tempo (BPM) and musical key of your audio bounces, giving you searchable metadata without manual tagging. This makes it easier to find tracks that match your current project or fit a particular mood.
When you trigger an analysis, the system reads your audio file and extracts two pieces of information: the beats-per-minute tempo and the musical key with its scale (like “C major” or “A minor”). These values then appear in your track’s metadata, ready to filter and search.
## What Gets Analyzed
The analysis engine looks at your audio file and extracts:
- BPM (Beats Per Minute) — The track’s tempo, detected by identifying rhythmic patterns in the audio
- Key — The musical key and scale (for example, “G major” or “E minor”), detected by analyzing the harmonic content
These attributes update in real time once analysis completes. You’ll see them appear in the track’s detail view and in any tag or filter interfaces.
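Key detection of this kind is commonly implemented by comparing the audio’s averaged chroma vector (energy per pitch class) against Krumhansl–Schmuckler key profiles. Here is a minimal sketch of that technique; it is an illustration of the general approach, not the product’s actual engine:

```python
# Sketch: key estimation by correlating a 12-bin chroma vector (average
# energy per pitch class, C through B) against Krumhansl-Schmuckler key
# profiles. Illustrative only; not the product's actual algorithm.

PITCHES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Perceptual key profiles (Krumhansl & Kessler, 1982)
MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]

def _correlation(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def estimate_key(chroma):
    """Return the best-matching key, e.g. 'C major', for a 12-bin chroma."""
    best = None
    for tonic in range(12):
        for profile, scale in ((MAJOR, "major"), (MINOR, "minor")):
            # Rotate the profile so its tonic lines up with this pitch class.
            rotated = [profile[(i - tonic) % 12] for i in range(12)]
            score = _correlation(chroma, rotated)
            if best is None or score > best[0]:
                best = (score, f"{PITCHES[tonic]} {scale}")
    return best[1]

# A chroma dominated by C, E, and G (a C major triad):
print(estimate_key([1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]))  # C major
```

The profile whose rotation correlates best with the observed chroma wins, which is why a chroma concentrated on a key’s tonic triad reliably maps to that key.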
## How to Trigger Analysis
You can start an analysis directly from the activity panel when viewing a track:
1. Select a track in your project grid
2. Open the Tags & Metadata section in the activity panel
3. Click the Analyze button in the Audio Analysis widget
4. Wait 1–3 seconds for processing to complete
5. The BPM and key fields populate automatically
For desktop app users, analysis runs locally on your machine. This means it’s fast, works offline, and your audio never leaves your device during processing.
## Analysis Status Indicators
The widget shows the current state of analysis for each track:
| Status | Meaning |
|---|---|
| Ready | Track hasn’t been analyzed yet |
| Analyzing | Processing in progress |
| Completed | BPM and key are saved and available |
| Failed | Analysis encountered an error (try again) |
If analysis fails, you can click Analyze again to retry. The system automatically clears stuck processing jobs after a brief timeout, so you won’t get stuck in an “analyzing” state.
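The lifecycle above can be sketched as a small state machine. The status names come from the table; the class, method names, and the 60-second stuck-job timeout are assumptions for illustration (the real timeout value is not documented here):

```python
import time

# Assumption: the real stuck-job timeout is not documented; 60 s is a placeholder.
STUCK_TIMEOUT_SECONDS = 60

class AnalysisTracker:
    """Hypothetical sketch of the widget's status lifecycle."""

    def __init__(self):
        self.status = "ready"       # track hasn't been analyzed yet
        self.started_at = None
        self.bpm = self.key = None

    def start(self):
        self.status = "analyzing"   # processing in progress
        self.started_at = time.monotonic()

    def complete(self, bpm, key):
        self.status = "completed"   # BPM and key saved to metadata
        self.bpm, self.key = bpm, key

    def fail(self):
        self.status = "failed"      # user can click Analyze to retry

    def clear_if_stuck(self, now=None):
        """Reset a job stuck in 'analyzing' so the UI never hangs there."""
        now = time.monotonic() if now is None else now
        if self.status == "analyzing" and now - self.started_at > STUCK_TIMEOUT_SECONDS:
            self.fail()

t = AnalysisTracker()
t.start()
t.clear_if_stuck(now=t.started_at + 120)  # simulate 2 minutes elapsed
print(t.status)  # failed
```

Modeling "stuck" as a transition to Failed (rather than a separate state) is what lets a retry click reuse the normal Ready-to-Analyzing path.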
## How Analysis Works
When you click Analyze, the system processes your audio file through these steps:
1. Read the file — The system locates your audio bounce file (WAV, MP3, FLAC, AIFF, M4A, and OGG are supported)
2. Decode audio — The file is decoded into raw audio data the analyzer can process
3. Extract features — Algorithms identify rhythmic patterns for BPM and harmonic content for key
4. Store results — The detected values are saved to your track’s metadata
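For tempo, the "extract features" step often boils down to finding beat onsets in the decoded audio and measuring the spacing between them. A minimal sketch under that assumption (the onsets are given directly here to keep it self-contained; this is not the product’s actual detector):

```python
from statistics import median

def estimate_bpm(onset_times):
    """Estimate tempo from a list of detected beat timestamps (seconds).

    Real analyzers derive these onsets from the decoded audio signal;
    here they are supplied directly for illustration.
    """
    intervals = [b - a for a, b in zip(onset_times, onset_times[1:])]
    # The median inter-onset interval is robust to a few early or late
    # beats; 60 / interval converts seconds-per-beat to beats-per-minute.
    return 60.0 / median(intervals)

beats = [0.0, 0.5, 1.0, 1.52, 2.0, 2.5]  # beats roughly 0.5 s apart
print(round(estimate_bpm(beats)))  # 120
```

Using the median rather than the mean keeps one slightly off beat (1.52 s above) from skewing the result.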
Desktop users benefit from local processing, which completes in 1–3 seconds. Web users get cloud-based processing, which may take slightly longer depending on file size and network conditions.
## Why Use Audio Analysis
Auto-detected BPM and key help you work faster in several ways:
Find matching tracks instantly — Filter your project library by tempo or key. Need a track in G minor for your latest beat? Search instead of scrolling through every file.
Organize by musical characteristics — Create smart collections based on tempo ranges (like “90 BPM vibes”) or keys that work together.
Save manual work — No need to tap out BPM yourself or guess which key a track is in. The system does it for you, accurately.
Mix and match confidently — Knowing a track’s key helps you layer sounds that harmonize. Two tracks in the same key will generally blend better than random combinations.
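Once BPM and key live in track metadata, this kind of search is just a predicate over that metadata. A sketch with hypothetical field names (`bpm`, `key`) standing in for whatever the real metadata schema uses:

```python
def find_tracks(tracks, key=None, bpm_range=None):
    """Filter track metadata by exact key and/or an inclusive BPM range."""
    matches = []
    for track in tracks:
        if key is not None and track.get("key") != key:
            continue
        if bpm_range is not None:
            low, high = bpm_range
            if not (low <= track.get("bpm", 0) <= high):
                continue
        matches.append(track)
    return matches

library = [
    {"name": "bounce-01", "bpm": 92, "key": "G minor"},
    {"name": "bounce-02", "bpm": 140, "key": "C major"},
    {"name": "bounce-03", "bpm": 90, "key": "G minor"},
]
print([t["name"] for t in find_tracks(library, key="G minor", bpm_range=(88, 94))])
# ['bounce-01', 'bounce-03']
```

A "90 BPM vibes" smart collection is the same query with only `bpm_range` set.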
## Supported Audio Formats
Analysis works with common audio bounce formats:
- WAV
- MP3
- FLAC
- AIFF / AIF
- M4A
- OGG
If your bounce is in a different format, convert it to one of these before triggering analysis for best results.
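A pre-flight check against the supported formats can save a failed analysis run. This sketch matches on file extension, which is an assumption about how the product recognizes formats:

```python
from pathlib import Path

# Extensions from the supported-formats list above.
SUPPORTED_EXTENSIONS = {".wav", ".mp3", ".flac", ".aiff", ".aif", ".m4a", ".ogg"}

def can_analyze(filename):
    """Return True if the bounce's extension is on the supported list."""
    return Path(filename).suffix.lower() in SUPPORTED_EXTENSIONS

print(can_analyze("my-bounce.WAV"))  # True
print(can_analyze("my-bounce.wma"))  # False
```

Lowercasing the suffix makes the check case-insensitive, so `.WAV` and `.wav` are treated the same.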
## Related
- Using Tags — Apply custom tags to complement auto-detected metadata
- Managing Tags & Categories — Organize your metadata with categories and hierarchies
- Musical Attributes — Learn more about how musical metadata works in your workflow