A friend whose band languishes in obscurity once told me that this is a great time to be alive, if you're a fan of music. To badly paraphrase Churchill: never in the history of recorded music has so much been available so easily and so cheaply.
On the musicians’ side, if you can ignore the fact that you'll probably never get paid, the price of the equipment required to produce a roughly studio-quality track has fallen dramatically. If you've got a laptop and some instruments, you're 80% of the way towards owning your own studio, and there's no shortage of places that will host this music for free, stream it and sell it.
This has led to a new problem: millions of tracks being uploaded to billions of potential listeners, but with precious little to connect the two.
This isn't a problem no-one knows about, of course. There are lots of musical recommendation engines available, and many are built into the streaming services themselves. But (as far as I know) they all work the same way: a user listens to one track, then another, and the service assumes that these two tracks are somehow similar. This is technically known as ‘collaborative filtering’, but I think of it as ‘getting your users to do the work for you’.
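The core of that idea is simple enough to sketch in a few lines. Here's a minimal, hypothetical version of item-item collaborative filtering: tracks are ‘similar’ purely because the same listeners played them both (the listening histories and track names are invented for illustration).

```python
from collections import defaultdict
from itertools import combinations
import math

# Hypothetical listening histories: each user maps to the set of tracks they played.
listens = {
    "alice": {"Track A", "Track B", "Track C"},
    "bob":   {"Track A", "Track B"},
    "carol": {"Track B", "Track C"},
}

# Count how many listeners each track has, and how many listeners each
# pair of tracks shares. This is the only 'knowledge' the system builds.
track_counts = defaultdict(int)
pair_counts = defaultdict(int)
for tracks in listens.values():
    for t in tracks:
        track_counts[t] += 1
    for a, b in combinations(sorted(tracks), 2):
        pair_counts[(a, b)] += 1

def similarity(a, b):
    """Cosine similarity: shared listeners / sqrt(listeners_a * listeners_b)."""
    shared = pair_counts[tuple(sorted((a, b)))]
    return shared / math.sqrt(track_counts[a] * track_counts[b])

print(similarity("Track A", "Track B"))  # high: two of their listeners overlap
print(similarity("Track A", "Track C"))  # lower: only one shared listener
```

Notice that nothing in there listens to any audio. The service never knows what the tracks sound like, only who played them together.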
No accounting for mood
My music tastes aren't hugely varied, but they span from Atari Teenage Riot to Philip Glass. Most music recommendation engines offer a ‘thumbs up’ or ‘thumbs down’ approach, which I first saw at Last.fm in 2002. There's no button for ‘I like this, but I'm just not in the mood for it right now’. Or ‘have you got something like this, but 20% more furious?’.
If only the software somehow knew your mood and took it into account in the music it played you.
Companies such as Google are probably getting to a place where they could estimate your emotional state just from your search terms, general internet behaviour and, I dunno, how you move the mouse pointer. This thought gives me the heebie-jeebies, but then I'm old, so I still care about such arcane concepts as ‘privacy’.
And that's not even considering that when you're in a bad mood, perhaps a bit of Philip Glass is what you need to hear, even as you reach for the comforting nostalgia of Slayer.
The other issue with the ‘users who liked this song also liked...’ methodology is that you'll only hear tracks which someone else has already listened to, and allowed the algorithm to slot into a niche in its database. It tends to push you towards mediocrity: bland songs which the most people already like. "You like Sonic Youth? You'll love REM and The B-52s!" Any aspiring DJ wants to hear songs no-one else is listening to, so that they can bring them to the ears of their followers.
If musical popularity is like a river system, collaborative filtering tends to push users downstream to where the water flows faster and deeper, rather than letting them explore the tributaries.
But what if there were software which could listen to new music and work out which existing tracks it's most similar to? This is closely related to another question: how can we work out whether a record will be popular or not?
The science of hits
Some record companies have been using software to detect if a song will be a hit since 2003.
This software, confusingly called Hit Song Science, uses machine learning (though some dispute that label) to give songs a percentage chance of becoming a hit. It's a depressing thought: that a band would go to a record company and play their record to a robot, which would decide whether humans would like it in sufficient numbers to justify their advance. In reality, it doesn't work quite like that.
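I have no idea what Hit Song Science actually does under the hood, but the ‘percentage chance of a hit’ idea can be sketched with the most basic tool in the box: logistic regression. Everything below is invented for illustration — toy feature vectors (say, tempo and catchiness, scaled to 0–1) and made-up hit/flop labels:

```python
import math

# Toy training data: hypothetical song features paired with a label
# (1 = was a hit, 0 = wasn't). All numbers are invented.
songs = [
    ([0.6, 0.9], 1),
    ([0.5, 0.8], 1),
    ([0.9, 0.2], 0),
    ([0.3, 0.1], 0),
]

def predict(weights, bias, x):
    """Logistic function squashes a weighted sum into a 0-1 'hit chance'."""
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 / (1 + math.exp(-z))

# Fit the weights by plain gradient descent on the prediction error.
weights, bias = [0.0, 0.0], 0.0
for _ in range(5000):
    for x, label in songs:
        error = predict(weights, bias, x) - label
        bias -= 0.1 * error
        weights = [w - 0.1 * error * xi for w, xi in zip(weights, x)]

# A new, unheard song gets a percentage chance of becoming a hit.
chance = predict(weights, bias, [0.55, 0.85])
print(f"{chance:.0%} chance of a hit")
```

The robot in this sketch never says yes or no; it just hands the A&R department a probability and lets the humans take the blame.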
But what is machine learning anyway?
Usually, when we program, we tell the computer explicitly how to deal with any input. If an input comes up which it doesn't understand, it'll generally throw an error.
Machine learning allows computers to examine a whole heap of information and try to identify patterns. Then, using these patterns, it can make predictions about any new data. It was one of the techniques used in a recent documentary by Richard Fairbrass lookalike Professor Armand Leroi. Leroi was attempting to predict what would make a hit record, but also to explore how our musical taste has changed and broadened over time. While (spoilers!) Leroi doesn't quite manage to predict a hit, his analysis is much more successful at identifying music which is related - and this is all done without any collaborative filtering.
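That ‘related, without collaborative filtering’ approach works on the audio itself: extract some numbers from each recording and treat tracks as neighbours if their numbers are close. Here's a toy sketch — the feature values (tempo, loudness and an entirely made-up ‘fury’ score) are invented, not real measurements of these tracks:

```python
import math

# Hypothetical features extracted from the recordings themselves:
# [tempo in BPM, loudness 0-1, 'fury' 0-1]. All values are invented.
features = {
    "Atari Teenage Riot - Speed":    [200, 0.95, 0.99],
    "Slayer - Raining Blood":        [190, 0.93, 0.95],
    "Philip Glass - Metamorphosis":  [60, 0.30, 0.05],
}

def distance(a, b):
    """Euclidean distance between two feature vectors.
    (A real system would scale the features first, so tempo doesn't dominate.)"""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_similar(track):
    """Rank every other track by closeness in feature space - no listeners required."""
    return sorted(
        (t for t in features if t != track),
        key=lambda t: distance(features[track], features[t]),
    )

print(most_similar("Atari Teenage Riot - Speed"))
```

Because the comparison only needs the audio features, a track nobody has ever played can be slotted next to its neighbours the moment it's uploaded — which is exactly what collaborative filtering can't do.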
The traditional method of establishing a musical genre is to get a music journalist to group together a bunch of bands who share a geographic location or sartorial preference and give them a sarcastic name. Usually the band dislikes this name, but is forced to adopt it in exchange for publicity. What if software which knows nothing about how tight the lead singer's trousers are had to group music into infinite genres based on the actual music which was made? And what if you could be played a selection of neighbouring music, some of which had never been heard before? That would be a beautiful thing.