
When Jeffrey Bilmes wrote his master’s thesis at MIT in 1993, he couldn’t have known his work on music perception would help launch the world of online radio. Today, an innovation of his is contributing to smart playlists and automated music personalization at the two biggest players in the business: Pandora and Spotify.

To understand what makes these services so innovative and ubiquitous today, it’s worth looking back at Bilmes’ insights, and at the musician whose acumen inspired the young scholar to explore the purpose of rhythm: Art Tatum, the jazz luminary who famously played piano like a one-man duet.

As a graduate student at MIT’s Media Laboratory, Bilmes was trying to model musical rhythm – and to teach computers to reproduce it expressively – by breaking it down into its component parts. “When you’re learning to play music and you’re learning to play jazz, there’s a utility to intuitively understanding what it is about music that makes it human,” Bilmes tells me. This is the standard problem of artificial intelligence: How can we build a program that can duplicate a complex human task? How can we write an algorithm to mimic human instinct?

It’s simple to write a program that plays rhythmically flawless music, but the outcome is lifeless and sterile; it doesn’t feel human. Bilmes needed a model of rhythm that would allow him to compare a recorded human performance to a regular metrical grid.

In his thesis, “Timing is of the Essence,” Bilmes presented a new concept he named the “tatum.” Sometimes described as a “musical atom,” the tatum represents the smallest unit of time that best corresponds to the rhythmic structure of a song. It’s where the human perception of music overlaps with numbers that an algorithm can understand, an attempt to teach machines to grasp music as we do.

“Humans are intuitive beings, and humans often aren’t able to describe how they’re able to do human things … I felt at the time that maybe I was violating a sacred oath in defining these things.”

The pioneers of “music information retrieval” – the science of categorizing and analyzing music computationally – expanded on his work to bring customized music to the masses.
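The core comparison described here – lining up a recorded human performance against a regular metrical grid – can be sketched in a few lines of code. This is a minimal illustration, not Bilmes’ actual method: the function name, the onset times, and the tatum duration are all made-up example values chosen to show the idea of measuring how far each played note lands from the nearest grid point.

```python
# Hypothetical sketch: measuring expressive timing against a regular
# tatum grid. All names and numbers here are illustrative, not taken
# from Bilmes' thesis.

def tatum_deviations(onsets, tatum, offset=0.0):
    """For each note onset (in seconds), find the nearest point on a
    regular tatum grid and return the signed deviation from it."""
    deviations = []
    for t in onsets:
        # Index of the nearest grid line for this onset
        k = round((t - offset) / tatum)
        grid_time = offset + k * tatum
        # Positive = note played late, negative = played early
        deviations.append(t - grid_time)
    return deviations

# A "human" performance of four notes, drifting slightly around a
# 0.25-second tatum grid (sixteenth notes at 60 bpm).
performance = [0.02, 0.24, 0.51, 0.76]
devs = tatum_deviations(performance, tatum=0.25)
print([round(d, 3) for d in devs])  # small early/late deviations
```

A perfectly quantized, “lifeless” rendition would produce all-zero deviations; it is precisely these small nonzero offsets that make a performance feel human, and that a model of expressive rhythm has to capture.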
