AcousticBrainz: Making a hard decision to end the project

We created AcousticBrainz 7 years ago and started to collect data, with the goal of using that data down the road once we had collected enough. We finally got around to doing this recently, and realised that the data simply isn’t of high enough quality to be useful for much at all.

We spent quite a bit of time brainstorming how to remedy this, but all of the solutions we found require a significant amount of money for both new developers and new hardware. We lack the resources to commit to properly rebooting AcousticBrainz, so we’ve taken the hard decision to end the project.

Read on for an explanation of why we decided to do this, how we will do it, and what we’re planning to do in the future.

Why we’re doing this

When we launched AcousticBrainz, we had a few goals we wanted to achieve with the project and the collected data:

  • Generate a list of musical characteristics of audio recordings, such as musical key and tempo (BPM).
  • Use the extracted data to automatically predict other musical characteristics, such as instrumentation, genre, or mood, based on current state-of-the-art algorithms and models for music classification.
  • Provide a source of mathematical features extracted from audio which other people could use to build their own models to predict other musical characteristics.

Unfortunately, a number of things happened with the data we collected which made us conclude that it isn’t of high enough quality to be as useful as we had hoped:

  • The musical key data that we were generating was accurate on some styles of music, but not on the full range of music collected in AcousticBrainz. The BPM tools work well on a wide range of music, but there are many recordings for which the predicted value is incorrect. These algorithms cannot report a confidence level for their predictions, so we are unable to determine which data we can trust.
  • Early on in the release of the AcousticBrainz data we determined that the existing models we had for categories such as genre didn’t work very well. Further experiments to build new models showed that it was difficult to get good results covering the full range of content in the database.
  • Right around the time we released the AcousticBrainz data extractor, deep learning techniques for this kind of prediction started to become prevalent. Unfortunately, the resolution of the data we collect in AcousticBrainz is too low to be used with this type of machine learning, so we were unable to try these new techniques on the data we had in the database. This also meant that researchers and others working on these tasks were not as interested in the data as we had hoped.
  • We spent some time introducing content-based similarity to AcousticBrainz, but when we used this data ourselves to generate similar or recommended recordings, it didn’t give good results.

Unfortunately, within the MetaBrainz team we don’t have the resources or developer availability to perform this kind of research ourselves, so we rely on other researchers and volunteers to help us integrate new tools into AcousticBrainz. That is a relationship we haven’t managed to build over the last few years.

What we’re going to do next

Based on the current state of the data in AcousticBrainz, we don’t want to keep promoting it as an accurate representation of the music that has been analysed, and so we have decided to stop collecting data.

In the next month or so we will stop accepting new data submissions to AcousticBrainz. We’ll remove downloads for the submission tools and modify the AcousticBrainz API to stop accepting new submissions. The rest of the API and the other tools on the site will continue to work as before.

We’ll make a full dump of all data available in AcousticBrainz, so that if anyone wants to download and use it themselves, they will be able to do so. In early 2023 we will shut down the AcousticBrainz site.

What we’re planning to do in the future

Part of the initial goal of AcousticBrainz was to provide a way to characterise and organise the recordings in the MusicBrainz database. This is data we’re still interested in collecting, and we have a few ideas about how to integrate it into other MetaBrainz projects:

  • Focus on user-provided tagging for music characteristics such as genre and mood/emotion. We have a good base for storing this in MusicBrainz, and plan to integrate new functionality into ListenBrainz to encourage the MetaBrainz community to help add more data. This data will be used in the new recommendation systems that we are starting to build into ListenBrainz.
  • Use improved tools to compute specific musical characteristics. We have been reviewing some of the recent work in tempo estimation, and are looking at how we can integrate it with tools such as Picard so that people can compute these features when they need them, and help us confirm that the computed data is correct (see the sketch below).
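
To make that concrete, here is a minimal sketch of the kind of client-side tempo estimation we have in mind, using the open-source librosa library purely as an example; the actual tooling and its integration with Picard are still undecided.

    # Hypothetical sketch: estimate the tempo (BPM) of a local audio file.
    # librosa is one open-source option; nothing here is the final design.
    import librosa

    def estimate_bpm(path):
        """Return an estimated tempo in beats per minute."""
        y, sr = librosa.load(path, mono=True)  # decode to mono PCM
        tempo, _beats = librosa.beat.beat_track(y=y, sr=sr)
        return float(tempo)

    print(estimate_bpm("example.flac"))  # hypothetical local file

A tool like Picard could compute such values locally and ask the user to confirm them before tagging, which addresses the confidence problem described above.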

Importantly, this doesn’t mean that we are no longer interested in building tools for music recommendation. On the contrary, our recent work has shown that the data we already have in ListenBrainz (user listening history) and in MusicBrainz (metadata, relationships, links, and tags) gives great results for the recommendations we have started to build, so we want to focus on improving and using this data going forward. Focusing on one project rather than two will also allow us to reach these goals sooner.


Please leave a comment if you have any questions!

7 thoughts on “AcousticBrainz: Making a hard decision to end the project”

  1. While I agree with some points, I am not so sure about many others, especially the usefulness of the data collected.

    Some time ago I created a JS framework to retrieve recordings from the user’s library that are similar to a track provided by the user (within Foobar2000):
    https://github.com/regorxxx/Search-by-Distance-SMP

    That, along with a JS genre & styles graph:
    https://github.com/regorxxx/Music-Graph

    And using harmonic mixing rules:
    https://github.com/regorxxx/Camelot-Wheel-Notation

    …provides impressive results, much more accurate than those based only on users’ listening history. Its strength lies especially in proper tagging and its applicability to any type of music. Models like the one at Spotify are heavily eurocentric, incapable of relating genres from different cultural groups. This can be seen especially in their “genre”-based playlists. Their “mood”-based playlists are also heavily biased towards specific cultures.

    The JS framework uses mood, BPM and key data, along with the genre graph, to relate essentially any tracks, no matter their geographic origin, by their similarity. For example, blues tracks are considered more similar to blues rock (eurocentric) or desert blues (afrocentric) than to jazz tracks; Spotify would only offer the eurocentric results. Another result of its design is that it works for any artist; it doesn’t care if ‘XXX’ is an unknown artist from Hawaii… if it’s folk, has a key and BPM (along with other variables), then it’s easy to find similar recordings. Listening-history-based models fail for unknown artists. Other AB variables, like timbre or acousticness, can also be used. (A rough sketch of the scoring idea follows below.)
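
    To make the idea concrete, here is a rough Python rendering of the kind of weighted scoring I mean; all weights, field names and the toy genre distance are hypothetical, not the actual Search-by-Distance-SMP code (which is JavaScript and uses a full genre/style graph):

        # Hypothetical Python rendering of the scoring idea; everything here
        # (weights, fields, toy genre distance) is illustrative only.
        CAMELOT_STEPS = 12  # keys arranged on a 12-position wheel

        def key_distance(a, b):
            """Shortest distance between two Camelot wheel positions (0-11)."""
            d = abs(a - b) % CAMELOT_STEPS
            return min(d, CAMELOT_STEPS - d)

        def similarity(t1, t2, genre_dist):
            """Weighted similarity in [0, 1]; the weights are illustrative."""
            bpm_score = max(0.0, 1 - abs(t1["bpm"] - t2["bpm"]) / 50)
            key_score = 1 - key_distance(t1["key"], t2["key"]) / 6
            genre_score = 1 - min(genre_dist(t1["genre"], t2["genre"]), 4) / 4
            return 0.5 * genre_score + 0.3 * bpm_score + 0.2 * key_score

        # Toy usage with a trivial genre distance (0 = same, 2 = related).
        d = lambda g1, g2: 0 if g1 == g2 else 2
        a = {"bpm": 120, "key": 6, "genre": "blues"}
        b = {"bpm": 124, "key": 7, "genre": "blues rock"}
        print(similarity(a, b, d))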

    Expecting content-based similarity to work without mixing that data with tags from the recordings themselves makes zero sense. I don’t see how that can be counted as a failure. What did you expect? It’s obviously a failure if you approach content similarity that way, but I think merging the results from AcousticBrainz with user tags and listening history would be a success. It’s not a matter of AcousticBrainz being a failure, but of trying to use it as a standalone tool.

    Furthermore, data like BPM or keys may be a bit error-prone, BUT… is there a better alternative out there? I don’t really get the point about it being unreliable. There is no software which calculates BPM without errors. I understand the need to improve those bits and also to ensure there is a confidence value associated with them, but I don’t see how that’s a problem specific to AcousticBrainz. The same will apply to anything baked into Picard in the future.

    In any case, I hope things like BPM, key calculation, moods and other musical characteristics get baked into Picard to give users easy access to the data… since that is the main strength of AcousticBrainz.

  2. Hi!

    >Expecting content based similarity would work without mixing that data with tags from the recordings itself makes zero sense. I don’t see how that has been seen as a failure. What did you expect?

    We didn’t test this in a vacuum: we tried to combine the ANNOY similarity data with genres, tags and even rudimentary recording similarity, and… nothing produced anything that was worthwhile. Part of the problem is that for some data classes the data might be 50% or even 70% accurate, but that doesn’t make it suitable for use in a recommendation system. For a recommender not to produce an ipod-whiplash-inducing playlist, the accuracy needs to be over 90% and the mistakes need to be minor. Neither can be said of the similarity data (ANNOY) in AcousticBrainz. (For context, a sketch of how such an index is built and queried follows.)
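
    For context, ANNOY builds an approximate nearest-neighbour index over per-recording feature vectors. A minimal sketch of building and querying such an index, with the dimensionality, metric and tree count chosen purely for illustration (not the values used in AcousticBrainz):

        # Minimal ANNOY sketch; DIMS, the metric and the tree count are
        # illustrative. Random vectors stand in for real audio features.
        import random
        from annoy import AnnoyIndex

        DIMS = 40  # hypothetical flattened feature-vector size
        vectors = [[random.random() for _ in range(DIMS)] for _ in range(1000)]

        index = AnnoyIndex(DIMS, "angular")
        for i, v in enumerate(vectors):
            index.add_item(i, v)
        index.build(10)  # 10 trees

        print(index.get_nns_by_item(0, 10))  # 10 approximate neighbours of item 0

    The neighbours are “similar” only in whatever feature space you index, which is exactly where the accuracy problems show up.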

    If you really think that AB has sufficient quality data to get a stay of execution, then please show us a reliable, working use case.

    And we’re not saying that we’re only going to rely on user listening data for all of this. Everything and anything is fair game for our recommendation efforts: we’re leaning heavily on MB data and are even working to add new features to MB in order to move this forward. First up: collecting moods from our users.

  3. I want to commend you for your courage in knowing when to throw in the towel. There’s often an instinctual pang of regret that comes with retiring a project – especially one that’s collected as much data as AB. It’s not at all easy to take a dispassionate evaluation and make the correct call to pull the plug. A soup with a handful of dirt in it is unsalvageable.

    For what it’s worth, I don’t think there’s much hope for ML-based recommendations given the current state of the tech zeitgeist and lack of creativity within it. The fundamental model always ends up directing people towards the top of the popularity pyramid – you start at a track from Nuggets and end up with the Beatles, you start with rare groove and end up at the Temptations, you start with George Onslow and end up at Beethoven. Great if you’re trying to sell product, not so great if you’re trying to foster an actual horizon-broadening intellectual development. It’s devoting whole racks of servers to perform the same function as simply reading a genre tag and spitting out the five most popular items that share that genre. Crowdsourced tagging only makes the problem worse, because it devalues eccentric input from the informed – if Richie Unterberger feels there’s a connection between the Beach Boys and Public Enemy, no matter how “ipod-whiplash”-inducing that may seem at first glance, I nonetheless value it far more than twenty nitwits insisting that Papa Was a Rollin’ Stone is just like TLC’s “Creep”.

  4. Hi,
    that is very, very sad news to me personally and to projects I maintain and am involved with. I do understand and respect some of your concerns though.

    I want to share some personal experiences with AB and try to get over my frustration at losing it for good…

    I recently started to use and contribute to https://github.com/beetbox/beets and was (actually, still am) working on a patch that allows AB data to be written to files with the already existing beets acousticbrainz plugin. We were in contact with “wargreen”, who brought the feature into Picard, and tried to be as compatible with Picard as possible to help create a standard for tagging AB data to files.

    Well, I guess none of that really matters anymore now :-/

    Another project, my personal pet project, also relies heavily on AB data. I am a DJ; I have always preferred vinyl but certainly also play with professional digital systems like NI Traktor and Pioneer Rekordbox. I wanted to bring typical features of those digital systems to vinyl DJs; that’s when I invented DiscoDOS: https://github.com/joj0/discodos

    It’s based on a Discogs record collection, but it can be asked to match releases to MusicBrainz, and from there a connection to AcousticBrainz can be made. At the moment it uses that to fetch key and BPM data, so the vinyl DJ has this information available in their collection even without having the music as files. For me this was the killer feature, and I think it’s quite unique. (Fetching that data looks roughly like the sketch below.)
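
    For anyone curious, fetching key and BPM for a MusicBrainz recording MBID from the AcousticBrainz low-level endpoint looks roughly like this; error handling is omitted and the MBID is a placeholder:

        # Rough sketch: fetch key and BPM from the AcousticBrainz low-level API.
        import requests

        def ab_key_bpm(mbid):
            url = f"https://acousticbrainz.org/api/v1/{mbid}/low-level"
            data = requests.get(url, timeout=10).json()
            key = f'{data["tonal"]["key_key"]} {data["tonal"]["key_scale"]}'
            return key, data["rhythm"]["bpm"]

        print(ab_key_bpm("some-recording-mbid"))  # placeholder MBID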

    Anyway, enough stories; I do have a few questions:

    – Is it on the roadmap for the MusicBrainz API itself to provide key and BPM data?

    – I get that you will use ListenBrainz for data like mood and, let’s call it, “subjective tagging” of music, but what about the more “technical” things, which to me are key, BPM, chords, average_loudness and so on? You talk about some ideas, but could you elaborate? I am quite interested. Thanks.

    On that note: I don’t think AcousticBrainz data was that bad in the areas just mentioned (the low-level endpoint, actually). Your BPM detection was great IMHO, and I remember a breakcore DJ once telling me that not even NI Traktor gets the BPM right for his tracks; he has to “tap it in” himself. You can’t be 100% accurate, and from a classic DJ’s point of view it doesn’t matter, at least IMHO. So in short, I think it was very usable, and the others are not really better.

    I use a commercial software called “Mixed in key” as well, and it claims to be the best at finding keys, but it is not perfect either. For instance, it tends to prefer minor keys over major ones. Since I think the majority of electronic DJ music is in minor, that might be a good heuristic, but often it’s just wrong. AB, on the other hand, often reports the relative major key instead, but technically speaking that’s mostly irrelevant: a computer, and even a person who knows music, will see that Bb major is the relative major of G minor and compatible anyway (see the toy example below). So yeah, in short: AB is not that bad and pretty usable IMHO.
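
    To illustrate the relative-key point: on the Camelot wheel, a relative major/minor pair shares the same number (Bb major is 6B, G minor is 6A), so treating them as compatible is trivial. A toy sketch with a deliberately partial key table:

        # Toy illustration: relative major/minor pairs share a Camelot number,
        # so Bb major and G minor are interchangeable. Partial table only.
        CAMELOT = {
            ("Bb", "major"): "6B", ("G", "minor"): "6A",
            ("F", "major"): "7B",  ("D", "minor"): "7A",
            ("C", "major"): "8B",  ("A", "minor"): "8A",
        }

        def compatible(k1, k2):
            """Same wheel number means the same key or its relative key."""
            return CAMELOT[k1][:-1] == CAMELOT[k2][:-1]

        print(compatible(("Bb", "major"), ("G", "minor")))  # True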

    – I don’t quite understand what the problem was with the AB data. Was it more that people submitted incomplete or wrong data, or are you talking about the algorithms themselves?

    – The essentia extractor is open source, right? The acousticbrainz server software is too, as far as I can see. So it theoretically should be possible for others to create an AcousticBrainz relaunch using all your tools, because everything is open source, right? (Sorry I didn’t do my homework thoroughly; apologies if I could have researched this myself, but I thought asking is OK.)

    – Have you heard of anyone who already wants to pick up the project?

    – Would a relaunch be possible from a licensing point of view?

    Thanks for all your efforts, appreciated!!! Still sorry to see you go, but I hope the future brings solutions for projects that used AB for something other than recommendation systems. Thanks again for all the valuable things the MetaBrainz Foundation provides to the open-source music community! 🙂

  5. Thanks Jojo for your comments, and for the link to your app.
    In response to some of your questions:

    > Was it more of the reason that people submit incomplete data or wrong data or are you talking about the algorithms themselves
    The problems that we saw were with the algorithms themselves. We always knew that we might get bad/incomplete data, so we gathered duplicate submissions in order to try and mitigate this. The problems that we encountered mostly involved algorithms getting the BPM, key, etc. clearly wrong, and consistently wrong over duplicate submissions of the same recordings. It’s interesting to see your comment about how the algorithms appeared to be better than those in Traktor. Did the DJ comment on how often the AcousticBrainz data was wrong? I wouldn’t be surprised if the algorithm was quite good on that specific style of music, and less good on music which doesn’t have a clear, strong beat.

    > The essentia extractor is open source, right? The acousticbrainz server software is, as far as I can see, too. So it theoretically should be possible for others to create an AcousticBrainz relaunch using all your tools
    Yes, 100%. With the software + forthcoming data dump, anyone will be able to take this and host the AcousticBrainz dataset. The data is CC0, so licensing-wise there won’t be a problem with this. We’ve not heard from anyone specifically who is interested in picking up the hosting of the site yet.

    Thanks again for your kind comments.

  6. Hi alastairporter,
    my story about the DJ complaining about Traktor’s BPM recognition had nothing to do with AcousticBrainz; I was just trying to make a point: BPM recognition is not easy, and not even commercial DJ software gets it right all the time. The question is only whether it’s “good enough” for a purpose (IMHO). Another story to prove my point: as already mentioned, I personally use “Mixed in key”, which is commercial and claims to be “good at it”; besides musical key it also detects BPM. Here too I see clearly wrong results from time to time. I can’t tell for sure if it happens more often with jazz tracks than with complex electronic ones, but recently I even found a Disco/Funk track with a very clear/easy beat that was detected entirely wrong… AB was good enough for my purpose, but obviously not for yours.

    I think you (MetaBrainz) had too big a goal: getting BPM and key right for all types of music in the world :-))

    Anyway, I don’t want to bother you any longer via this blog, but I might get in contact on IRC/Matrix; is there a channel where MetaBrainz and also Essentia people (MTG) hang around? I’m interested in the future of the Essentia project itself and have ideas about tagging music files with Essentia data.

    Last question: I’m about to submit my digital collection to AB at the moment, so I can use the data in my “analog” collection via DiscoDOS (I have a lot of my records digitally as well). This is my last chance to easily get key/BPM for my records before I need to take another road… How long do I have? When will you close down submission access?
