Did You Know? 10 MusicTech Facts That Will Surprise You

A collection of fascinating facts about music technology, standards, and innovations that shape how we create, share, and experience music. Part 2 of an ongoing series.

Welcome to "Did You Know?" - MusicTech Edition! A series where we share fascinating facts, surprising innovations, and lesser-known aspects of music technology. From industry standards to accessibility breakthroughs, these are the things that make music technology truly remarkable.


1. MusicXML Makes Sheet Music Accessible to Blind Musicians

Did you know? MusicXML, the universal standard for digital sheet music, can be converted to music braille notation, enabling blind and visually impaired musicians to read musical scores independently.

Projects like FreeDots and the music21 Python library provide tools for this conversion. A sighted arranger can create a score in Finale or Sibelius, export it to MusicXML, and a blind musician can convert it to braille notation within minutes.

This wasn't possible before digital standards - blind musicians had to rely on specialized transcribers who would manually convert printed scores to braille, a process that could take weeks for a single symphony.

Impact: Over 2 million blind and visually impaired people worldwide can now access the same musical scores as sighted musicians, opening doors to music education and professional performance that were previously closed.


2. MIDI Is Over 40 Years Old and Still Industry Standard

Did you know? The MIDI (Musical Instrument Digital Interface) specification was released in 1983 and remains the backbone of music production - the protocol went virtually unchanged for decades, with its first major revision, MIDI 2.0, arriving only in 2020.

Created through unprecedented cooperation between competing companies (Roland, Sequential Circuits, Oberheim, and others), MIDI was designed to let synthesizers from different manufacturers communicate. Four decades later, it still connects instruments, controllers, and software worldwide.
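
The protocol itself is tiny - a note is just three bytes. Here's a minimal sketch using the browser's Web MIDI API (supported in Chromium-based browsers; it assumes at least one MIDI output device is connected):

// A Note On message is three bytes: status (0x90 = Note On, channel 1),
// note number (60 = middle C), and velocity (0-127)
const midi = await navigator.requestMIDIAccess();
const output = [...midi.outputs.values()][0];      // first connected device
output.send([0x90, 60, 100]);                      // Note On: middle C
setTimeout(() => output.send([0x80, 60, 0]), 500); // Note Off after 500 ms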

Fun fact: MIDI files are incredibly small because they store performance instructions rather than sound - Beethoven's entire 9th Symphony as MIDI is about 100KB, while the same performance as uncompressed CD audio runs around 700MB.


3. Spotify Analyzes Every Song's "Danceability"

Did you know? Spotify calculates audio features for every single track in its library, including:

  • Danceability (0.0 to 1.0) - How suitable for dancing based on tempo, rhythm stability, and beat strength
  • Valence (0.0 to 1.0) - Musical positiveness (high valence = happy, cheerful)
  • Energy (0.0 to 1.0) - Perceptual intensity and activity
  • Speechiness (0.0 to 1.0) - Presence of spoken words

These features are exposed through Spotify's Web API, enabling developers to create playlists based on mood, energy levels, or even heart rate from fitness trackers (note that Spotify restricted access to the audio-features endpoint for newly registered apps in late 2024).
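
Here's a minimal sketch of fetching these features - the token and track ID are placeholders you'd supply from your own app:

// Fetch audio features for one track (token and trackId are placeholders)
const token = 'YOUR_ACCESS_TOKEN';
const trackId = 'TRACK_ID';
const res = await fetch(`https://api.spotify.com/v1/audio-features/${trackId}`, {
  headers: { Authorization: `Bearer ${token}` },
});
const { danceability, valence, energy, speechiness } = await res.json();
console.log({ danceability, valence, energy, speechiness });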


4. The Music Industry Has Its Own "Language" for Data Exchange

Did you know? DDEX (Digital Data Exchange) is a consortium that creates standards for how music companies communicate. When you release a song on Spotify, Apple Music, or any major platform, DDEX XML files carry the metadata.

There are different DDEX standards for different purposes:

  • ERN (Electronic Release Notification) - For releasing new music
  • DSR (Digital Sales Reporting) - For royalty statements
  • RIN (Recording Information Notification) - For studio session data
  • MLC (Musical Works Licensing) - For publishing rights

Without DDEX, every platform would speak a different "language," making global music distribution a nightmare.
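
To give a flavor of the format, here's a heavily simplified, ERN-style fragment being read in the browser. Real ERN messages are far larger and stricter - the structure below is trimmed down purely for illustration:

// Parse a trimmed-down, ERN-style XML fragment (illustrative only)
const ern = `
  <NewReleaseMessage>
    <SoundRecording>
      <ISRC>USABC2400001</ISRC>
      <ReferenceTitle>Example Track</ReferenceTitle>
    </SoundRecording>
  </NewReleaseMessage>`;
const doc = new DOMParser().parseFromString(ern, 'application/xml');
console.log(doc.querySelector('ISRC').textContent); // "USABC2400001"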


5. Your Favorite Song's Key Was Probably Detected by AI

Did you know? Services like Beatport, rekordbox, and Mixed In Key use machine learning algorithms to detect the musical key of songs with over 95% accuracy.

This technology analyzes the audio's frequency spectrum, identifies the dominant notes, and matches them against key profiles. DJs use this to create harmonically compatible playlists - mixing songs in related keys creates smoother transitions.
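
One classic building block behind these tools is template matching: compare the track's chroma vector (energy per pitch class) against a profile for each possible key. Here's a sketch of that idea using the Krumhansl-Kessler major-key profile - commercial products layer machine learning and minor-key handling on top of this:

// Sketch: template-based key detection (major keys only).
// chroma = 12 numbers, one per pitch class (C..B), e.g. derived from an FFT.
const MAJOR_PROFILE = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                       2.52, 5.19, 2.39, 3.66, 2.29, 2.88]; // Krumhansl-Kessler
const NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];

function correlation(a, b) {
  const mean = (v) => v.reduce((s, x) => s + x, 0) / v.length;
  const ma = mean(a), mb = mean(b);
  let num = 0, da = 0, db = 0;
  for (let i = 0; i < a.length; i++) {
    num += (a[i] - ma) * (b[i] - mb);
    da += (a[i] - ma) ** 2;
    db += (b[i] - mb) ** 2;
  }
  return num / Math.sqrt(da * db);
}

function detectMajorKey(chroma) {
  let best = { key: null, score: -Infinity };
  for (let tonic = 0; tonic < 12; tonic++) {
    // Rotate the profile so index `tonic` becomes the key's home note
    const rotated = chroma.map((_, i) => MAJOR_PROFILE[(i - tonic + 12) % 12]);
    const score = correlation(chroma, rotated);
    if (score > best.score) best = { key: `${NOTES[tonic]} major`, score };
  }
  return best.key;
}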

The algorithms have become accurate enough that they can rival or even outperform trained musicians in blind tests, especially on songs with ambiguous tonality.


6. ISRC Codes Have Tracked Every Recording Since 1986

Did you know? Every commercially released recording has a unique 12-character identifier called an ISRC (International Standard Recording Code).

Structure: CC-XXX-YY-NNNNN

  • CC = Country code
  • XXX = Registrant code
  • YY = Year of reference
  • NNNNN = Designation code
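
Here's a minimal sketch of validating and unpacking an ISRC. Note that the dashes are for display only - stored ISRCs are 12 characters with no separators:

function parseISRC(input) {
  // 2-letter country, 3 alphanumeric registrant, 2-digit year, 5-digit designation
  const m = /^([A-Z]{2})([A-Z0-9]{3})(\d{2})(\d{5})$/
    .exec(input.replace(/-/g, '').toUpperCase());
  if (!m) return null;
  const [, country, registrant, year, designation] = m;
  return { country, registrant, year, designation };
}

console.log(parseISRC('US-ABC-24-00001'));
// { country: 'US', registrant: 'ABC', year: '24', designation: '00001' }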

Over 100 million ISRCs have been assigned. When you stream a song, the ISRC is how streaming platforms identify exactly which recording to pay royalties for - even if there are 50 different versions of "Happy Birthday."


7. MuseScore Has More Users Than Finale and Sibelius Combined

Did you know? MuseScore, the free and open-source notation software, has over 10 million users worldwide - more than the commercial giants Finale and Sibelius combined.

The software is entirely community-driven, with contributions from musicians and developers globally. Its companion site, musescore.com, hosts millions of user-created scores that can be viewed, played back, and downloaded.

Why it matters: Professional music notation software costs $300-600. MuseScore democratizes music education by giving everyone access to publication-quality engraving tools for free.


8. The Loudness Wars Are Officially Over (Thanks to Standards)

Did you know? Streaming platforms now normalize audio loudness, effectively ending the "loudness wars" where albums were mastered as loud as possible to stand out on radio.

All major platforms normalize to a target level measured in LUFS (Loudness Units relative to Full Scale):

  • Spotify: -14 LUFS
  • Apple Music: -16 LUFS
  • YouTube: -14 LUFS
  • Amazon Music: -14 LUFS

If you master a track at -8 LUFS (extremely loud), the platform will turn it down. If you master at -18 LUFS (very dynamic), it gets turned up. The result? Dynamic, well-mastered music no longer loses to "loud" masters.
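
Conceptually, normalization is just a static gain applied at playback time. A sketch of the arithmetic (real platforms differ in measurement details, and some only attenuate rather than boost):

// Gain a platform would apply so a track plays back at the target loudness
function playbackGainDb(trackLufs, targetLufs = -14) {
  return targetLufs - trackLufs; // negative = turned down, positive = turned up
}

console.log(playbackGainDb(-8));  // -6 -> a slammed master is turned down 6 dB
console.log(playbackGainDb(-18)); //  4 -> a dynamic master is turned up 4 dB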


9. AI Can Now Separate Any Song Into Individual Stems

Did you know? Tools like LALAL.AI, Demucs, and Spleeter can separate a mixed song into individual stems (vocals, drums, bass, other) with remarkable quality.

This technology uses deep neural networks trained on thousands of songs where the original stems were available. The AI learned to recognize which frequencies belong to which instruments and can now "unmix" songs it has never heard before.

Use cases:

  • DJs creating acapellas for remixes
  • Musicians learning parts from recordings
  • Remastering old recordings where original tapes are lost
  • Karaoke without the cheesy MIDI backing tracks

10. Web Audio API Turns Every Browser Into a Synthesizer

Did you know? Modern web browsers include a complete audio synthesis engine accessible through JavaScript. The Web Audio API can:

  • Generate waveforms (sine, square, sawtooth, triangle)
  • Apply filters, compression, and effects
  • Analyze audio in real-time (FFT, waveform visualization)
  • Create spatial 3D audio
  • Process microphone input

No plugins needed. Tools like Tone.js wrap this API to create full-featured DAWs that run entirely in the browser. You can build instruments, effects, and even complete music production tools using just JavaScript.

// Create a simple synthesizer in a few lines
// (note: browsers only allow audio to start after a user gesture, e.g. a click)
const ctx = new AudioContext();
const osc = ctx.createOscillator();
osc.type = 'sine';            // sine, square, sawtooth, or triangle
osc.frequency.value = 440;    // A4
osc.connect(ctx.destination); // wire the oscillator straight to the speakers
osc.start();
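
Effects are just more nodes in the audio graph. Here's a self-contained sketch of a filtered, volume-controlled tone:

// Route a harmonically rich oscillator through a lowpass filter and a gain node
const ctx2 = new AudioContext();
const osc2 = ctx2.createOscillator();
const filter = ctx2.createBiquadFilter();
const gain = ctx2.createGain();
osc2.type = 'sawtooth';       // rich in harmonics, so the filtering is audible
filter.type = 'lowpass';
filter.frequency.value = 800; // cut harmonics above ~800 Hz
gain.gain.value = 0.5;        // half volume
osc2.connect(filter).connect(gain).connect(ctx2.destination);
osc2.start();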

What's Next?

This is Part 2 of an ongoing series exploring the fascinating world of music technology. Each installment will bring new facts, innovations, and insights from the intersection of music and technology.

Know a surprising MusicTech fact? Let us know - we might feature it in a future edition.


MusicTech Lab builds software for the music industry. From DDEX integrations to audio analysis tools, we've seen the technology behind the music. Contact us if you're building something in the MusicTech space.

Need Help with This?

Building something similar or facing technical challenges? We've been there.

Let's talk — no sales pitch, just honest engineering advice.