Fresh Releases – My (G)SoC journey with MetaBrainz

For an open source software enthusiast like me, who has contributed little pieces of code and documentation to various projects for almost a decade, the idea of applying for Google Summer of Code has always been both exciting and intimidating because of its grand nature. After gaining some experience in web development over the past year, I decided not to give in to my self-doubts and applied for GSoC 2022 with confidence and zeal. I am Chinmay Kunkikar from India, and I would like to talk about my project with the MetaBrainz Foundation – Fresh Releases – and take you on my journey through the GSoC 2022 program.

MusicBrainz is the largest structured online database of music metadata. Today, a myriad of developers leverage this data to build their client applications and projects. According to MusicBrainz Database statistics, 2022 alone saw a whopping 366,680* releases from 275,749 release groups, and 91.5% of these releases have cover art. Given that MusicBrainz has a plethora of useful data about music releases but no good means to present it visually to general users, the idea of building the Fresh Releases page was born.

* as of 2022-11-30

Our objectives with this project were, therefore:
  1. To make music discovery easier for the users by presenting the data available from the MusicBrainz database.
  2. To use a user’s listening habits to show them personalized music release suggestions.

Now let me take you step by step through the execution of this idea. First comes the design process and choices for this page, then a discussion of the implementation of the card grid, the filters component, the timeline component, and the user-specific releases page, and how they were made responsive. Later on, we will talk about some pains of testing React Hooks with the Enzyme library, and an accident I had with git. We will also see how we plan to improve Fresh Releases in the future. So let's begin!

The design process

It was a natural design choice to represent a music release with a card, as this is an accepted practice on modern music apps and websites. A release card shows the metadata of a release – the date, name, release group type, and artist(s) – with the cover art in the middle.

A Release Card

Iteration 1 – The initial approach was to show a grid of release cards with an infinite scroll. The filters were of two kinds: one where a user can switch between Upcoming Releases, New Releases, and This Week's Releases from a button group, and one where the user can filter releases by release group type, selected from a dropdown menu. To avoid loading hundreds of results at a time, a Show more button was placed at the bottom of the page. This design approach was similar to the Charts tab of the ListenBrainz user page.

It was pointed out during reviews that using dropdowns for the release group filters and using button groups to separate New releases from Upcoming releases were unintuitive choices. We also thought of having a filter that would remove all releases without cover art from the page, for a more visual exploration of music (good one, mayhem).

Iteration 2 – To keep the UI simple but interesting, we thought of showing today’s releases in the middle of the screen, allowing users to scroll up or down to see past or future releases respectively. The practicality and the user experience of this idea were a concern initially but the implementation of similar ideas in apps like Apple’s Time Machine app was convincing enough to make us go forward with it. This time, we used the classic Holy grail layout to design the page. The card grid will be shown in the main content column, the left sidebar will be used for filters, and the right sidebar will have a timeline slider component to scroll the grid up or down. This rectified the concerns we had earlier –

  1. The dropdown menu for filters is gone.
  2. Adopting the past/future scrolling idea resulted in an intuitive UI, getting rid of the filters that separated New releases from Upcoming releases.
  3. The toggle to hide releases without cover art can be accommodated in the filters column with this layout.

Responsive layout – Thanks to the Holy grail layout, making the page responsive for mobile and tablet was straightforward. We transpose the layout so that the left sidebar stacks horizontally on top of the main content section and the right sidebar stacks horizontally below it, leaving the header and footer unchanged.

A set of two buttons was later added at the top of the page to switch between sitewide (or global) releases and user-specific releases (discussed in detail in a later section).

API Design

We built two API endpoints – one for the sitewide releases and the second one for user-specific releases.

  1. Sitewide fresh releases – This endpoint was built around the idea of the timeline feature of the UI. It optionally accepts a pivot date as an argument to show releases around that date. It also optionally accepts the number of days as an argument to show releases from days before and after the pivot date.
GET /1/explore/fresh-releases

Parameters
1. release_date – Fresh releases will be shown around this pivot date. The default is today’s date.
2. days – The number of days of fresh releases to show. Max 30 days.

Sample response

{
  "artist_credit_name": "Tar Blossom",
  "artist_mbids": [
    "313ab6d1-44e2-49eb-92e6-9e9ad2554bcd"
  ],
  "caa_id": 31955354514,
  "release_date": "2022-02-20",
  "release_group_mbid": "eadb43dd-9c2d-48b4-bf0c-4bb6baa61eb5",
  "release_group_primary_type": "Album",
  "release_mbid": "41c8921a-5fe9-4c15-ac62-0a7525271b5c",
  "release_name": "Of Mountains and Suns"
}
  2. User-specific fresh releases – This endpoint fetches releases within a month for the given user. It accepts no arguments. We will discuss user-specific releases in more detail in a separate section.
GET /1/user/{username}/fresh_releases
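
For illustration, here is a minimal sketch of calling both endpoints with Python's requests library. The base URL is assumed to be the public ListenBrainz API root, the username is a placeholder, and the exact response envelope may differ from the trimmed sample above.

import requests

BASE = "https://api.listenbrainz.org"  # assumed public API root

# Sitewide fresh releases around a pivot date (parameters as documented above).
resp = requests.get(
    f"{BASE}/1/explore/fresh-releases",
    params={"release_date": "2022-02-20", "days": 7},
)
resp.raise_for_status()
print(resp.json())

# User-specific fresh releases (no parameters; username is a placeholder).
resp = requests.get(f"{BASE}/1/user/chinmaykunkikar/fresh_releases")
resp.raise_for_status()
print(resp.json())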

Some data sanitization

While collecting data from multiple sources, MusicBrainz can sometimes create separate MBIDs for the same release if the metadata from two sources differs slightly, resulting in duplicates of a release.

For example, {..."release_name": "Waterslide, Diving Board, Ladder to the Sky"} and {..."release_name": "Waterslide, Diving Board, Ladder To The Sky"} (notice the capitalization of “To The”). Such results were deduplicated using lodash.uniqBy().
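
The page does this with lodash.uniqBy() in the frontend; the sketch below shows the same keep-first-occurrence idea in Python. The choice of key (a caseless release name plus the artist) is illustrative only, not the exact key used in the code, and the toy data is invented.

def unique_by(items, key):
    # Keep the first item seen for each key, preserving order
    # (the semantics of lodash.uniqBy).
    seen, out = set(), []
    for item in items:
        k = key(item)
        if k not in seen:
            seen.add(k)
            out.append(item)
    return out

releases = [
    {"release_name": "Waterslide, Diving Board, Ladder to the Sky", "artist_credit_name": "Some Artist"},
    {"release_name": "Waterslide, Diving Board, Ladder To The Sky", "artist_credit_name": "Some Artist"},
]
deduped = unique_by(
    releases,
    key=lambda r: (r["release_name"].casefold(), r["artist_credit_name"]),
)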

Filters section

Filters help users shortlist releases from specific categories, for example, Albums, Singles, or Remix, among others.

The filters component is divided into two sections – a “Hide releases without cover art” toggle and a list of release group types. This list is a dynamically generated array of the unique release group types found in the API response object.
Multiple filters can be selected to show releases matching a combination of filters.
The initial logic for the cover-art-only toggle required a lot of prop drilling through multiple components. To overcome this anti-pattern, we added the caa_id property to the API response. The caa_id is the Cover Art Archive ID, which is available only for releases that have valid cover art. The cover-art-only toggle can thus use this property to hide the releases that don't have a caa_id.

The Timeline

We were not sure what basic component or library should be used to implement the timeline. After struggling to find an implementation that matches our needs, a suggestion came from monkey to use the basic HTML range slider element. After contemplating this suggestion, we decided to use the rc-slider library which is based on the HTML range slider but is more React-friendly with additional useful features and styles. We got help from monkey again for the scrolling logic for the slider.

The slider shows marks with dates on them. Clicking on a date triggers the changeHandler() function, which takes a percentage value from the slider's current position and returns the position on the page to scroll to, scrolling the page to the respective date. This is made possible by the createMarks() function, which calculates a percentage value for the number of releases per date in the releases list and creates the object the slider uses to draw its marks. handleScroll() is a debounced function triggered every time the user manually changes the scroll position.
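
The actual createMarks() lives in the TypeScript frontend; the Python sketch below only illustrates the underlying idea: each date becomes a mark positioned at the cumulative percentage of releases preceding it, so the mark spacing mirrors release density.

from collections import Counter

def create_marks(release_dates):
    # Count releases per date, then walk the dates in chronological order.
    counts = Counter(release_dates)
    total = sum(counts.values())
    marks, seen = {}, 0
    for date in sorted(counts):
        # percent position on the slider -> date label
        marks[round(seen / total * 100, 2)] = date
        seen += counts[date]
    return marks

print(create_marks(["2022-02-19", "2022-02-20", "2022-02-20", "2022-02-21"]))
# {0.0: '2022-02-19', 25.0: '2022-02-20', 75.0: '2022-02-21'}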

User-specific Fresh Releases page

User-specific releases or User Fresh Releases will show releases from artists that the user has listened to before. It uses a confidence score, which is the number of times the user has listened to an artist, to rank the releases in decreasing order. A user can switch between the sitewide releases and user-specific releases using the Pill buttons at the top of the page. If the user is not logged in, they will only see the sitewide releases.

Making the page responsive

Since ListenBrainz uses Bootstrap as its base styling framework, we used the Bootstrap 3 breakpoints, with an additional breakpoint from Bootstrap 4 (576px), to make the page responsive across screen sizes. The number of columns in the grid was straightforward: we started with two columns and added an extra column per breakpoint. The filters and timeline components, which are vertical on desktop screens, switch to a horizontal orientation on screens up to the md breakpoint. As a future enhancement, we plan to make the timeline look and behave like Android's fastscroll widget (thanks to aerozol for the suggestion).

Tests

ListenBrainz uses snapshot testing on the frontend to make sure there are no unexpected changes to the UI. While writing unit tests that mock the API calls was a cakewalk, we struggled with rendering and mocking the UI components for snapshot testing. A combination of Jest and Enzyme is used to test React components, which works quite well for React class components but quickly becomes a nightmare when testing the functional components that Fresh Releases uses. Enzyme provides no APIs to test and mock hooks like useState. Despite having no support for hooks, it can, surprisingly, run the code inside the hooks themselves, but there is no easy way to mock the useState setter function. As a result, there are still half-written tests for Fresh Releases. This limited support for newer React features will only worsen over time because Enzyme is no longer actively maintained. There are discussions in the team about moving away from Enzyme in favor of other testing libraries, and we believe there are good reasons to do so.

The incident with git

I messed with git reset --hard and git push --force on my working branch, wiping away the entire commit history from both the local and the remote branch. We used git reflog to recover the lost commits. git reflog is similar to git log, but instead of listing the commits of the current HEAD, it lists the times when HEAD (or another reference) itself was changed. We checked out the previous HEAD into a new branch, and all of the commit history was back (thanks again, lucifer). An important lesson learned that night was to never play around with git reset unless you're 100% sure of what you are doing. Use separate branches to test commands that can mutate your commit history. But also, do not panic if you do run into such situations; there is a high chance git has cached the history somewhere locally.

Git will never fail to surprise you, no matter how experienced you are in using it.

Future improvements and enhancements

To add to the list in the LB-1172 ticket,

  1. Release card grid – Remove all text content from the cards and keep just the cover art when the “Hide releases without cover art” filter is set. The text can be shown on the cover art when the release card is hovered over, and the name of the filter will be changed to “Show cover art only.” (suggestion by aerozol)
  2. Integration of BrainzPlayer on the page will add the ability to play new releases directly on the page. To quote monkey, “the solution would be to redirect users to a playable album page on LB. For example, listenbrainz.org/player/release/f5d6d909-06dc-4811-8e13-811e6af31b82 “.
  3. Performance optimizations – One issue Fresh Releases faces is the rendering of hundreds (if not thousands) of DOM nodes on the page. This can significantly slow down the page and the browser. The temporary solution we have used is to limit the number of release days shown, but we want the page to be able to show up to a month of releases. This issue can be solved either by adding “load more” buttons or by virtualizing (“windowing”) the card grid component using libraries like react-window.
  4. Implement aerozol’s idea to modify the timeline on mobile screens to look and behave like Android’s fastscroll widget (discussed above).
  5. Add more tests to test combinations of filters.

My journey with MetaBrainz Foundation

While browsing the organizations, I recognized that the MetaBrainz Foundation is the brains (pun intended) behind MusicBrainz Picard, the handy tool that has kept my music library clean for years. After reading about their projects, my interest in the ListenBrainz project was piqued because I learned that it is essentially an open source alternative to Last.FM, a service where I enjoy perusing my music listening habits.

During the community bonding period of GSoC, I set up and played around with the ListenBrainz codebase, as well as closed a few tickets. During this time, I worked on adding a tooltip to the BrainzPlayer progress bar and updating node dependencies to the most recent versions (more complicated than it sounds).

Here is a list of all of my pull requests – metabrainz/listenbrainz-server/pulls.

Even though my original project proposal was not selected as a GSoC project, the MetaBrainz team was so impressed with my work that they chose to create an internship position exclusively for me. Fresh Releases was not an official GSoC project, but the team nonetheless considered it as such. You can read the story in a previous blog post.

My experience & learnings

I’ve always admired open source and the opportunities it provides. Working on projects like ListenBrainz has pushed me to contribute more to open source. In these past few months, apart from working on Fresh Releases, I also had the chance to work on other parts of the codebase, which further boosted my confidence in working on large codebases. I am fortunate to have worked with mayhem, lucifer, and monkey, lead developers who, working in open source environments, have the extra skill of connecting with the community and being patient with new enthusiasts. monkey's thorough code reviews motivated me to work on the project even more; he would explain concepts and then provide short code snippets to show me how to implement them. The entire MetaBrainz community is motivated and a joy to work with. Every design and technical decision is backed up by well-thought-out recommendations and open discussions on IRC. Developers will take note of your suggestions. Your efforts will be recognised. Working here has been a lot of fun!

P.S. Did I mention that they printed the proposals of all the selected students and put them up on the MetaBrainz office wall?!

My proposal along with others’ on the office wall!

Cleaning up the Music Listening Histories Dataset

Hi, this is Prathamesh Ghatole (IRC Nick: “Pratha-Fish”), and I am an aspiring Data Engineer from India, currently pursuing my bachelor’s in AI at GHRCEM Pune, and another bachelor’s in Data Science and Applications at IIT Madras. 

I had the pleasure of being mentored by alastairp and the rest of the incredible team at the MetaBrainz Foundation throughout this complicated but super fun project as a GSoC ’22 contributor. This blog is all about my journey over the past 18 weeks.

In an era where music streaming is the norm, it is no secret that to create modern, more efficient, and personalized music information retrieval systems, the modelling of users is necessary because many features of multimedia content delivery are perceptual and user-dependent. As music information researchers, our community has to be able to observe, investigate, and gather insights from the listening behavior of people in order to develop better, personalized music retrieval systems. Yet, since most media streaming companies know that the data they collect from their customers is very valuable, they usually do not share their datasets. The Music Listening Histories Dataset (MLHD) is the largest-of-its-kind collection of 27 billion music listening events assembled from the listening histories of over 583k last.fm users, involving over 555k unique artists, 900k albums, and 7M tracks. The logs in the dataset are organized in the form of sanitized listening histories per user, where each user has one file, with one log per line. Each log is a quadruple of: 

<timestamp, artist MBID, release MBID, recording MBID>

The full dataset contains 576 files of about 1 GB each, bundled into 32 TAR files (summing to ~611.39 GB) to facilitate convenient downloading.

Some salient features of the MLHD:

  • Each entity in every log is linked to a MusicBrainz Identifier (MBID) for easy linkage to other existing sources.
  • All the logs are time-stamped, resulting in a timeline of listening events.
  • The dataset is freely available and is orders of magnitude larger than any other dataset of its kind.
  • All the data is scraped from last.fm, where users publicly self-declare their music listening histories.

What is our goal with this project?

The dataset would be useful for observing many interesting insights like:

  • How long people listen to music in a single session
  • The kinds of related music that people listen to in a single session
  • The relationship between artists and albums and songs
  • What artists do people tend to listen to together?

In its current form, the MLHD is a great dataset in itself, but for our particular use case we'd like to make some additions and fix a few issues inherently caused by last.fm's out-of-date matching algorithms against the MusicBrainz database. (All issues are discussed in detail in my original GSoC proposal.)

For example:

  1. The artist conflation issue: We found that the artist MBIDs for commonly used names were wrong in many logs, where the artist MBID pointed to an incorrect artist with the same name in the MusicBrainz database. For example, for the song “Devil's Radio” by George Harrison (of the Beatles), the MLHD incorrectly points to an obscure Russian hardcore group also named “George Harrison”.
  2. Multiple artist credits: The original MLHD provides only a single artist MBID, even for recordings with multiple artists involved. We aim to fix that by providing a complete artist credit list for every recording.
  3. Complete data for every valid recording MBID: We aim to use the MusicBrainz database to fetch accurate artist credit lists and release MBIDs for every valid recording MBID, improving the quality and reliability of the dataset.
  4. MBID redirects: 22.7% of the recording MBIDs (from a test set of 371k unique recording MBIDs) were not suitable for direct use. Of these, 98.66% simply redirected to other MBIDs (which were themselves correct).
  5. Non-canonical MBIDs: A significant fraction of MBIDs were not canonical. In the case of recording MBIDs, a release group might use multiple valid MBIDs to represent a release, but there is always a single MBID that is “most representative” of the release group, known as the “canonical” MBID.

While redirecting and non-canonical MBIDs are technically correct and identical when considered in aggregate, we think replacing these MBIDs with their canonical counterparts would be a nice addition to the existing dataset and aid in better processing. Overall, the goal of this project is to write high-performance Python code that resolves the dataset, as quickly as possible, to an updated version in the same format as the original, but with incorrect data rectified and invalid data removed.

Check out the complete codebase for this project at: https://github.com/Prathamesh-Ghatole/MLHD

The Execution

Personally, I'd classify this project as a Data Science or Data Engineering task involving lots of analytics, exploration, cleanup, and moving goals and paths as a result of back-and-forth feedback from stakeholders. For a novice like me, this project was made possible through many iterations involving trial and error, learning new things, and constantly evolving existing solutions to make them more viable and in line with the goals. Communication was a critical factor throughout this project, and thanks to Alastair, we were able to communicate effectively on the #metabrainz IRC channel, keep a log of changes in my task journal, and hold weekly Monday meetings to keep up with the community.

Skills/Technologies used

  • Python3, Pandas, iPython Notebooks – For pretty much everything
  • NumPy, Apache Arrow – For optimizations
  • Matplotlib, Plotly – For visualizations
  • PostgreSQL, psycopg2 – For fetching MusicBrainz database tables, quick-and-dirty analytics, etc.
  • Linux – For working with a remote Linux server for processing data.
  • Git – For version control & code sharing.

Preliminary Analysis

1. Checking the demographics for MBIDs

We analyzed 100 random files from the MLHD (3.6M rows) and found the following. Of the 381k unique recording MBIDs, ~22.7% were not readily usable, i.e., they had to be redirected or made canonical. However, ~98.66% of those were correctly redirected to a valid recording MBID using the MusicBrainz database's “recording” table, implying that only ~0.301% of all unique recording MBIDs from the MLHD were completely unknown (i.e., they neither belonged to the “recording” table nor had a valid redirect). Similarly, ~5.508% of all unique artist MBIDs were completely unknown (belonging to neither the “artist” table nor the “artist_gid_redirect” table).

2. Checking for the artist conflation Issue:

There are many artists with exactly the same name, and we were unsure whether last.fm's algorithms matched the correct artist MBID to a recording MBID in every such case. To verify this, we fetched the artist MBID for each recording MBID in a test set and compared it to the artist MBID actually present in the dataset. Lo and behold, we discovered that ~9.13% of the 376,037 unique cases in our test set faced this issue.

SOLUTION 1

This is how we first tried dealing with the artist conflation issue:

  1. Take a random MLHD file
  2. “Clean up” the existing artist MBIDs and recording MBIDs, and find their canonical MBIDs. (Discussed in detail in the section “Checking for non-canonical & redirectable MBIDs”)
  3. Fetch the respective artist name and recording name for every artist MBID and recording MBID from the MusicBrainz database.
  4. For each row, pass <artist name, recording name> to either of the following MusicBrainz APIs:
    1. https://datasets.listenbrainz.org/mbc-lookup 
    2. https://labs.api.listenbrainz.org/mbid-mapping
  5. Compare artist MBIDs returned by the above API to the existing artist MBIDs in MLHD.
  6. If the existing MBID is different from the one returned by the API, replace it.

However, this method meant making an API call for each of the 27bn rows of the dataset. That is 27 billion API calls, where each call would've taken at least 0.5s, i.e., about 156,250 days just to solve the artist conflation issue. This was in no way feasible and would've taken ages even if we parallelized the whole process with Apache Spark. And even then, the output generated by this API would've been a fuzzy match prone to errors.

SOLUTION 2

Finally, we tackled the artist conflation issue by using the MusicBrainz database to fetch the artist credit list for each valid recording MBID. This enabled us to perform in-memory computations and completely eliminated the need to make API calls, saving a lot of processing time. Not only did this make sure that every artist MBID corresponded to its correct recording MBID 100% of the time, it also:

  • Improved the quality of the provided artist MBIDs by providing the full list of artist MBIDs for recordings credited to multiple artists.
  • Increased the count of release MBIDs in the dataset by 10.19%!
    (Test performed on the first 15 files from the MLHD, summing up to 952229 rows of data)

3. A new contender appears! (Fixing the MBID mapper)

While working out “SOLUTION 1” as discussed in the previous section, we processed thousands of rows of data and compared the outputs of the mbc-lookup and mbid-mapping APIs, discovering that they sometimes returned different outputs when they should have been identical. This uncovered a fundamental issue in the mbid-mapping API, which ListenBrainz actively uses to link listens streamed by users to their respective entities in the MusicBrainz database. We spent a while analyzing the depth of this issue by generating test logs and reports for both mapping endpoints, and discovered patterns that pointed to bugs in the API's matching algorithm. This discovery helped lucifer debug the mapper, resulting in pull request metabrainz/listenbrainz-server#2133: “Fix invalid entries in canonical_recording_redirect table”.

4. Checking for non-canonical & redirectable MBIDs

To use the MusicBrainz database to fetch artist names and recording names by their MBIDs, we first had to make sure the MBIDs we used to look up the names were valid, consistent, and “clean”. This was done by:

  1. Checking if an MBID was redirected to some other MBID, and replacing the existing MBID with the MBID it redirected to.
  2. Finding a Canonical MBID for all the recording MBIDs.

We used the MusicBrainz database's “mapping.canonical_recording_redirect” table to fetch canonical recording MBIDs, and the “recording_gid_redirect” table to check and fetch redirects for all the recording MBIDs. We first tried mapping an SQL query over every row to fetch results, but soon realized it would slow the whole process down to unbearable levels. Since we were running the processes on “Wolf” (a MetaBrainz server), we had access to 128 GB of RAM, enabling us to load all the required SQL tables into memory using Pandas and eliminating the need to query SQL tables stored on disk.
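
As a rough sketch of that in-memory lookup pattern (table and column names simplified; in the real schema, recording_gid_redirect.new_id is a row ID that must be joined back to recording.gid to obtain an MBID):

import pandas as pd
import psycopg2

conn = psycopg2.connect("dbname=musicbrainz_db")  # connection details assumed

# Load the redirect table into memory once, instead of querying per row.
# (Simplified: the real new_id needs a join against recording.gid.)
redirects = pd.read_sql(
    "SELECT gid AS old_mbid, new_id AS new_mbid FROM recording_gid_redirect",
    conn,
)
redirect_map = dict(zip(redirects["old_mbid"], redirects["new_mbid"]))

def resolve(mbid):
    # Follow a redirect if one exists; otherwise keep the MBID as-is.
    return redirect_map.get(mbid, mbid)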

5. Checking for track MBIDs being mistaken for recording MBIDs

We suspected that some of the unknown recording MBIDs in the dataset could actually be track MBIDs disguised as recording MBIDs due to mapping errors. While exploring the demographics of a test sample of 381k unique recording MBIDs, we found that none of the unknown recording MBIDs confirmed this. To verify the problem further, we ran the tests on ALL recording MBIDs in the MLHD. To hit two birds in one iteration, we also re-compressed every file in the MLHD from GZIP to the more modern Zstandard (ZSTD) compression, since GZIP read/write times were a huge bottleneck while costing us 671GB in storage space (the recompression step is sketched in code after the list below). This process resulted in:

  • The conversion of all 594,410 MLHD files from GZIP to ZSTD compression in 83.1 hours.
  • The dataset shrinking from 571 GB to 268 GB (~53% smaller!).
  • File write speed: 17.46% improvement.
  • File read speed: 39.25% deterioration.
  • Confirmation that no track MBIDs existed in the recording MBID column of the MLHD.
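
A minimal sketch of that per-file recompression step, using the zstandard package (file paths, naming, and the level-10 setting are assumptions):

import gzip
import zstandard

def gzip_to_zstd(src, dst, level=10):
    # Stream-decompress the GZIP file and re-compress it as ZSTD,
    # without ever holding the whole file in memory.
    cctx = zstandard.ZstdCompressor(level=level)
    with gzip.open(src, "rb") as fin, open(dst, "wb") as fout:
        cctx.copy_stream(fin, fout)

gzip_to_zstd("user_000001.txt.gz", "user_000001.txt.zst")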

Optimizations

1. Dumping essential lookup tables from the MusicBrainz database to parquet.

We used the following tables from the MusicBrainz database in the main cleanup script to query MBIDs:

  1. recording: Look up recording names using recording MBIDs; get a list of canonical recording MBIDs for lookups.
  2. recording_gid_redirect: Look up valid redirects for recording MBIDs, using redirectable recording MBIDs as the index.
  3. mapping.canonical_recording_redirect: Look up canonical recording MBIDs, using non-canonical recording MBIDs as the index.
  4. mapping.canonical_musicbrainz_data: Look up artist MBIDs and release MBIDs, using recording MBIDs as the index.

In our earlier test scripts, we mapped SQL queries over the recording MBID column to fetch outputs. This resulted in ridiculously slow lookups, with a lot of time wasted on I/O. We decided to pre-load the tables into memory using pandas.read_sql(), which added some constant time at the beginning of the script but reduced lookup times from dozens of seconds to milliseconds. The pandas documentation recommends using an SQLAlchemy connectable to fetch SQL tables into pandas; however, we noticed that pandas.read_sql() with a psycopg2 connection was 80% faster than with an SQLAlchemy connectable, even though pandas officially doesn't support psycopg2 connections at all. Fetching the same tables from the database again and again was still slow, so we decided to dump all the required SQL tables to parquet, yielding a further 33% improvement in loading time.
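
A sketch of that dump step (connection details and file layout are assumptions):

import pandas as pd
import psycopg2

conn = psycopg2.connect("dbname=musicbrainz_db")  # connection details assumed

TABLES = [
    "recording",
    "recording_gid_redirect",
    "mapping.canonical_recording_redirect",
    "mapping.canonical_musicbrainz_data",
]

for table in TABLES:
    # One slow SQL read per table, then a fast parquet cache for later runs.
    df = pd.read_sql(f"SELECT * FROM {table}", conn)
    df.to_parquet(table.replace(".", "_") + ".parquet")

# Subsequent runs skip SQL entirely:
recording = pd.read_parquet("recording.parquet")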

2. Migrating CSV reading and writing from pandas.read_csv()/pandas.to_csv() to pyarrow.csv

We started off by using custom functions based on pandas.read_csv() to read CSV files and preprocess them (rename columns, drop rows as required, concatenate multiple files if specified, etc.). Similarly, we used pandas.to_csv() to write the files. However, we soon discovered that these functions were unnecessarily slow, and a HUGE bottleneck for processing the dataset. We were able to optimize the custom functions by leveraging pandas’ built-in vectorized functions instead of relying on for loops to pre-process dataframes once loaded. This brought down the time required to load test dataframes significantly.

pandas.read_csv() and pandas.to_csv() are super convenient on their own, but they aren't particularly performant, especially when they need to compress or decompress files while reading and writing; pandas' reading/writing functions come with a ton of extra bells and whistles. Intuitively, we started writing our own barebones CSV reader/writer with NumPy. It turned out this method was far slower than the built-in pandas methods! We then tried vectorizing our custom barebones CSV reader using Numba, an open-source JIT compiler that translates a subset of Python and NumPy code into fast machine code. However, this method failed too, for various reasons (mostly my own inexperience with Numba). Finally, we tried pyarrow, a library that provides Python APIs for the functionality of the Arrow C++ libraries, including reading, writing, and compressing CSV files. This was a MASSIVE success, yielding an 86.11% improvement in write speed and a 30.61% improvement in read speed, even while writing DataFrames back to CSV with ZSTD level-10 compression!
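
A sketch of such a pyarrow-based reader and writer (the tab delimiter is an assumption about the file format, and compression-level tuning is elided):

import pyarrow as pa
from pyarrow import csv

def read_zst_csv(path):
    # Decompress ZSTD on the fly and parse the CSV into an Arrow table.
    with pa.CompressedInputStream(pa.OSFile(path), "zstd") as stream:
        return csv.read_csv(
            stream, parse_options=csv.ParseOptions(delimiter="\t")
        )

def write_zst_csv(table, path):
    # Write the Arrow table back out as ZSTD-compressed CSV.
    with pa.CompressedOutputStream(path, "zstd") as stream:
        csv.write_csv(table, stream)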

3. Pandas optimizations

In pandas, there are often multiple ways to do the same thing, and some of them are much faster than others because of how they are implemented. We realized a bit too late that pandas isn't that good for processing big data in the first place! But I think we were pretty successful with our optimizations and made the best of pandas anyway. Here are some neat optimizations that we did along the way.

pd.DataFrame.loc[] returns a whole row (a vector of values), while pd.DataFrame.at[] returns only a single value (a scalar).

Intuitively, pd.DataFrame.loc[] should be the faster way to fetch a tuple of values, since fetching them with pd.DataFrame.at[] requires multiple lookups per iteration, one per value, whereas .loc[] fetches the whole row at once. However, for our use case, running pd.DataFrame.at[] twice per iteration to fetch two values was still ~55x faster than running pd.DataFrame.loc[] once to fetch the complete row!
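
A toy illustration of the two access patterns (the data is invented for the example):

import pandas as pd

# Hypothetical lookup table indexed by recording MBID.
lookup = pd.DataFrame(
    {"artist_mbids": [("a1", "a2"), ("a3",)], "release_mbid": ["r1", "r2"]},
    index=["rec1", "rec2"],
)

# One .loc[] call fetches the whole row as a Series...
row = lookup.loc["rec1"]
artists, release = row["artist_mbids"], row["release_mbid"]

# ...but two scalar .at[] lookups were ~55x faster for this access pattern.
artists = lookup.at["rec1", "artist_mbids"]
release = lookup.at["rec1", "release_mbid"]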

Some of the most crucial features that pandas offers are vectorized functions.
Vectorization in our context refers to the ability to work on a set of values in parallel. Vectorized functions simply do a LOT more work per loop, enabling them to produce results way faster than typical for-loops, which operate on a single value per iteration. In pandas, these vectorized functions can speed up operations by as much as 1000x! For the MLHD, we fetched artist MBIDs and release MBIDs (based on a recording MBID) as a tuple representing a pair of MBIDs. This meant one tuple per recording MBID, leaving us with a series of tuples that we needed to split into two separate series. The simplest solution is tuple unpacking with Python's built-in zip function, as follows:

artist_MBIDs, release_MBIDs = zip(*series_of_tuples)

For our particular case, we also had to map a “clean up” function over the whole series before unzipping it. That mapping step was a serious bottleneck, so we had to find something better. We were able to significantly speed up the process by avoiding apply/map functions entirely and cleverly utilizing existing vectorized functions instead. The details of the solution can be found at: quickHow: how to split a column of tuples into a pandas dataframe (github.com)
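
A sketch of both approaches on toy data (the faster variant follows the approach in the linked gist):

import pandas as pd

# A Series of (artist_mbids, release_mbid) tuples, one per recording MBID.
pairs = pd.Series([("a1", "r1"), ("a2", "r2"), ("a3", "r3")])

# Naive: unzip through Python objects (slow at MLHD scale when combined
# with a mapped "clean up" function).
artist_mbids, release_mbids = zip(*pairs)

# Faster: materialize once and let pandas build both columns in one pass.
split = pd.DataFrame(pairs.tolist(), columns=["artist_mbids", "release_mbid"])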

In our first few iterations, we used pandas.Series.isin() to check if a series of recording MBIDs existed in the “recording” table of the MusicBrainz database or not. Pandas functions in general are very optimized and occasionally written in C/C++/Fortran instead of Python, making them super fast. I assumed the same would be the case with pandas.Series.isin(). My mentor suggested that we use built-in Python sets and plain for-loops for this particular case. My first reaction was “there’s no way our 20 lines of Python code are gonna hold up against pandas. Everyone knows running plain for-loops against vectorized functions is a huge no-no”. But, as we kept on trying, the results completely blew me away! Here’s what we tried:

  1. Convert the index (a list) of the “recording” table from the MusicBrainz database to a Python Set.
  2. Iterate over all the recording MBIDs in the dataset using a for-loop, and check if MBIDs exist in the set using Python’s “in” keyword.

For 4,738,139 rows of data, pandas.Series.isin() took 13.1s. The set + for-loop method took 1.03s, with an additional one-time cost of 6s to convert the index list into a set! The magic here comes from converting the index of the “recording” table into a Python set, which essentially puts all the values in a hashmap.

A hashmap reduces the time complexity of a membership check to O(1). pandas.Series.isin(), on the other hand, was stuck with at least O(n) time complexity, given that it's essentially a list-search algorithm working on unordered items. Our arrangement meant a one-time cost of converting the index to a Python set at the start of the script, followed by a cheap O(1) membership check for every row we looped over.
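
In sketch form, with toy data standing in for the real tables (the timings quoted above are from our 4.7M-row test, not this toy example):

import pandas as pd

# 'known' stands in for the index of the MusicBrainz "recording" table;
# 'mbids' stands in for the MLHD recording MBID column.
known = pd.Index(["r1", "r2", "r3"])
mbids = pd.Series(["r1", "rX", "r3", "rY"])

# Vectorized membership test: 13.1s on our 4.7M-row test set.
mask_isin = mbids.isin(known)

# One-time set build, then O(1) lookups in a plain loop: 1.03s (+6s set build).
known_set = set(known)
mask_loop = [m in known_set for m in mbids]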

Final Run

As of October 20, 2022, we've started the final run to process and re-write all 594,410 MLHD files, i.e., ~27 billion rows of data. The output of a test performed on the first 15 files from the MLHD, summing to 952,229 rows of data, is as follows:

Here, the cleanup process involves: fetching redirects for recording MBIDs; fetching canonical MBIDs for recording MBIDs; fetching artist credit lists and release MBIDs based on recording MBIDs; and mapping the newly fetched artist credit lists and release MBIDs to their respective recording MBIDs.

The above process is completely recording MBID oriented in order to maintain quality and consistency. This means completely wiping off the data in the artist_MBID and release_MBID columns in order to replace them with reliable data fetched from the MusicBrainz database. This also means that the above process will bring a significant change in the demographics of various entities (artist MBIDs, release MBIDs, and recording MBIDs) in the final version of the dataset.

Even though the impact of changing demographics varies from file to file (depending on the user’s tendency to listen to certain recordings repeatedly), here are some statistics based on the first 15 files in the MLHD, before and after processing:

For a complete test set of 952,229 input rows, the shrinkage is as follows. The original MLHD shrinks to 789,788 rows after dropping rows with empty recording MBID values (17.06% shrinkage). After processing, the same input shrinks to 787,690 rows once rows with empty recording MBID values are dropped (17.28% shrinkage). For a fair comparison, we drop all rows with empty recording MBID values from both datasets, leaving 789,788 rows in the original and 787,690 in the processed dataset. The absolute shrinkage between the two is:

Abs_shrinkage = ((789788 - 787690) / 789788) * 100 = 0.27%

Therefore, the cleaning process only resulted in a shrinkage of 0.27% of the existing recording MBIDs in the MLHD! Note that this stat is in line with our previous estimate that ~0.301% of all recording MBIDs were completely unknown. As per the original MLHD research paper, about ~65% of the MLHD contains at least the recording MBID. We might have the option to drop the remaining 35% of the dataset or keep the rows without recording MBIDs as they are. Out of the 65% of the MLHD with recording MBIDs, ~0.301% of the recording MBIDs would have to be dropped (since they're unknown). This leaves us with: 27bn – (35% of 27bn) – (0.3% of 65% of 27bn) ≈ 17.5bn rows of super high quality data!

Now similarly, let’s compare the row count shrinkages for different columns.

  1. The count of non-empty recording MBIDs shrank by 0.27%.
  2. The count of non-empty release MBIDs expanded by 14.08%.
  3. The count of non-empty artist MBIDs shrank by 13.36%.

Given an average processing time of 0.2168s per 10,000 rows, we estimate the entire dataset will take 27,000,000,000 / 10,000 × 0.2168s / 3600 / 24 ≈ 6.775 days (about 162 hours) to process.

Primary Outcomes

  1. The MLHD is currently set to be processed with an ETA of ~7 days of processing time.
  2. I was able to generate various reports to explore the impact of the “artist conflation issue” in the MLHD. These extra insights and reports uncovered a few issues within the MusicBrainz ID mapping lookup algorithm, which lucifer fixed in pull request metabrainz/listenbrainz-server#2133: “Fix invalid entries in canonical_recording_redirect table”.

Miscellaneous Outcome

How I got picked as a GSoC candidate without a single OSS PR to my name beforehand is beyond me, but with the help of alastairp and lucifer, I was able to solve and merge PRs for two issues in listenbrainz-server as an exercise to get to know the ListenBrainz codebase a little better.

My Experience

This journey has been nothing but amazing! The sheer amount of things that I learned during these past 18 weeks is ridiculous. I really appreciate the fun people and work culture here, which was even more apparent during the MetaBrainz Summit 2022 where I had the pleasure to see the whole team in action on a live stream.

Coming from a music tech background and having extensively used MetaBrainz products in the past, it was a surreal experience to work with these super smart engineers who have built technologies I could only dream of making. I often found myself admiring my seniors as well as my peers for their ability to come up with pragmatic solutions with veritable speed and accuracy, especially lucifer, whose work ethic inspired me the most! I hope some of these qualities eventually rub off on me too 🙂

I'd really like to take a moment to appreciate my mentor, alastairp, for always being super supportive and precise in his advice, and for helping me push the boundaries whenever possible. I'd also like to thank him for being very considerate, even through times when I've been unpredictable and unreliable, and, not to mention, for giving me the opportunity to work with him in the first place!

Suggestions for aspiring GSoC candidates

  • Be early.
  • Ask a lot of questions, but do your due diligence by exploring as much as possible on your own as well.
  • OSS codebases might seem ginormous to most students with limited work experience. Ingest the problem statement bit by bit, and slowly work your way toward potential solutions with your mentor.
  • Believe in yourself! It’s not a mission impossible. You always miss the shots that you don’t take.

You can contact me at:
IRC: “Pratha-Fish” on #Metabrainz IRC channel
Linkedin: https://www.linkedin.com/in/prathamesh-ghatole
Twitter: https://twitter.com/PrathameshG69 
GitHub: https://github.com/Prathamesh-Ghatole

MetaBrainz Summit 2022

The silliest, and thus best, group photo from the summit. Left to right: Aerozol, Monkey, Mayhem, Atj, lucifer (laptop), yvanzo, alastairp, Bitmap, Zas, akshaaatt

After a two-year break, in-person summits made their grand return in 2022! Contributors from all corners of the globe visited the Barcelona HQ to eat delicious local food, sample Monkey and alastairp’s beer, marvel at the architecture, try Mayhem’s cocktail robot, savour New Zealand and Irish chocolates, munch on delicious Indian snacks, and learn about the excellent Spanish culture of sleeping in. As well as, believe it or not, getting “work” done – recapping the last year, and planning, discussing, and getting excited about the future of MetaBrainz and its projects.

We also had some of the team join us via stream: Freso (who also coordinated all the streaming and recording), reosarevok, lucifer, rdswift, and many others who popped in. Thank you for patiently waiting while we ranted, and for the times we didn't notice you had your hand up. lucifer – who wasn't able to come in person because of bullshit visa rejections – we will definitely see you next year!

A summary of the topics covered follows. The more intrepid historians among you can see full event details on the wiki page, read the minutes, look at the photo gallery, and watch the summit recordings on YouTube: Day 1, Day 2, Day 3

OAuth hack session

With everyone together, the days before the summit proper were used for some productive hack sessions. The largest of these, involving the whole team, was the planning and beginning of a single OAuth location – meaning that everyone will be sent to a single place to log in from all of our projects.

A great warmup for the summit, this session also saw us leap forward on the project, from identifying exactly how it would work to getting substantial amounts of code and frontend elements in place.

Project recaps

“I broke this many things this year”

To kick off the summit, after a heart-warming introduction by Mayhem, we were treated to the annual recap for each project. For the full experience, feast your eyeballs on the Day 1 summit video – or click the timestamps below. What follows is an eyeball-taster: some simplistic and soothing highlights.

State of MetaBrainz (Mayhem) (4:50)

  • Mayhem reminds the team that they’re kicking ass!
  • We’re witnessing people getting fed up with streaming and focusing on a more engaged music experience, which is exactly the type of audience we wish to cater to, so this may work out well for us.
  • In 2023 we want to expand our offerings to grow our supporters (ListenBrainz)
  • Currently staying lean to prepare for incoming inflation/recession/depression

State of ListenBrainz (lucifer) (57:10)

  • 18.4 thousand all-time users
  • 595 million all-time listens
  • 92.3 million new listens submitted this year (so far)
  • Stacks of updates in the last year
  • Spotify metadata cache has been a game changer

State of Infrastructure (Zas) (1:14:40)

  • We are running 47 servers, from 42 in 2019
  • 27 physical (Hetzner), 12 virtual (Hetzner), 8 active instances (Google)
  • 150 Terabytes served this year
  • 99.9% availability of core services
  • And lots of detailed server, Docker, and ansible updates, and all the speed and response time stats you can shake a stick at.

State of MusicBrainz (Bitmap) (1:37:50)

  • React conversion coming along nicely
  • Documentation improved (auto-generated schema diagrams)
  • SIR crashes fixed, schema changes, stacks of updates (genres!)
  • 1,600 active weekly editors (stable from previous years)
  • 3,401,467 releases in the database
  • 391,536 releases added since 2021, ~1,099 per day
  • 29% of releases were added by the top 25 editors
  • 51% of releases were added with some kind of importer
  • 12,607,881 genre tag votes
  • 49% of release groups have at least one genre
  • 300% increase in the ‘finnish tango’ genre (3, was 1 in 2021)

State of AcousticBrainz (alastairp) (2:01:07)

  • R.I.P. (for more on the shut down of AB, see the blog post)
  • 29,460,584 submissions
  • 1.2 million hits per day still (noting that the level of trust/accuracy of this information is very low)
  • Data dumps, with tidying of duplicates, will be released when the site goes away

State of CritiqueBrainz (alastairp) (2:17:05)

  • 10,462 total reviews
  • 443 reviews in 2022
  • Book review support!
  • General bug squashing

State of BookBrainz (Monkey) (2:55:00)

  • A graph with an arrow going up is shown, everyone applauds #business #stonks
  • Twice the amount of monthly new users compared to 2021
  • 1/7th of all editions were added in the last year
  • Small team delivering lots of updates – author credits, book ratings/reviews, unified addition form
  • Import plans for the future (e.g. Library of Congress)

State of Community (Freso) (3:25:00)

  • Continuing discussion and developments re. how MetaBrainz affects LGBTQIA2+ folks
  • New spammer and sockpuppet countermeasures
  • Room to improve moderation and reports, particularly cross-project

Again, for delicious technical details, and to hear lots of lovely contributors get thanked, watch the full recording.

Discussions

“How will we fix all the things alastairp broke”

Next (not counting sleep, great meals, and some sneaky sightseeing) we moved to open discussion of various topics. These topics were submitted by the team, topics or questions intended to guide our direction for the next year. Some of these topics were discussed in break-out groups. You can read the complete meeting minutes in the summit minutes doc.

Ratings

Ratings were added years ago and remain prominent on MusicBrainz. The topic for discussion was: what is their future? Shall we keep them? This was one of the most popular debates at the summit, with input from the whole spectrum of rating lovers and haters. In the end it was decided to gather more input from the community before making any decisions. We invite you to regale us with tales of your usage, suggestions, and thoughts in the resulting forum thread. 5/5 discussion.

CritiqueBrainz

Similar to ratings, CritiqueBrainz has been around for a number of years now and hasn't gained much traction. Another popular topic, with lots of discussion about how we could encourage community submissions, improvements that could be made, and how we could integrate it more closely with the other projects. Our most prolific CB contributor, sound.and.vision, gave some invaluable feedback via the stream. Ultimately it was decided that we are happy to sunset CB as a website (without hurry), but retain its API and integrate it into our other projects. Bug fixes and maintenance will continue, but new feature development will take place in other projects.

Integrating Aerozol (design)

Aerozol (the author of this blog post, in the flesh) kicked us off by introducing himself with a little TED talk about his history and his design strengths and weaknesses. He expressed interest in being part of the ‘complete user journey’, and in helping to pull MetaBrainz' amazing work in front of the general public, while being quite polite about MeB's current attempts in this regard. It was decided that Aerozol should focus on over-arching design roadmaps that can be used to guide project direction, and that it is the responsibility of the developers to make sure new features and updates have been reviewed by a designer (including fellow designer, Monkey).

MusicBrainz Nomenclature

Can MetaBrainz sometimes be overly fond of technical language? To answer that, ask yourself this: did we just use the word ‘nomenclature’ instead of something simpler, like ‘words’ or ‘terms’, in this section title? Exactly. With ListenBrainz aiming for a more general audience, who expect ‘album’ instead of ‘release group’ and ‘track’ instead of ‘recording’, this was predicted to become even more of an issue. Although it was acknowledged that it's messy and generally unsatisfying to use different terms for the same things within the same ‘MetaBrainz universe’, we decided that it is fine for ListenBrainz to use more casual language for its user-facing bits, while retaining the technical language behind the scenes/in the API.

A related issue was also discussed, regarding how we title and discuss groupings of MusicBrainz entities, which is currently inconsistent, such as “core entities”, “primary entities”, “basic entities”. No disagreements with yvanzo’s suggestions were raised, the details of which can be found in ticket MBS-12552.

ListenBrainz Roadmap

Another fun discussion (5/5 – who said ratings weren’t useful!), it was decided that for 2023 we should prioritize features that bring in new users. Suggestions revolved around integrating more features into ListenBrainz directly (for instance, integrating MusicBrainz artist and album details, CritiqueBrainz reviews and ratings), how to promote sharing (please, share your thoughts and ideas in the resulting forum thread), making landing pages more inviting for new users, and how to handle notifications.

From Project Dev to Infrastructure Maintenance

MetaBrainz shares a common ‘tech org’ problem, stemming from working in niche areas which require high levels of expertise. We have many tasks that only one or a few people know how to do. It was agreed we should have another doc sprint, which was scheduled for the third week of January (16th-20th).

Security Management / Best Practices

Possible password and identity management solutions were discussed, and how we do, and should, deal with security advisories and alerts. It was agreed that there would be a communal security review the first week of each month. There is a note that “someone” should remember to add this to the meeting agenda at the right time. Let’s see how that pans out.

Search & SOLR

Did you know that running and calibrating search engines is a difficult Artform? Indeed, a capital-A Artform. Our search team discussed a future move from SOLR v7 to SOLR v9 (SOLR is MusicBrainz' search engine). It was discussed how we could use BookBrainz as a guinea pig by moving it from ElasticSearch (the search engine BB currently runs on) to SOLR, and finally try to tackle multi-entity search while we are at it. If you really like reading about ‘cores’, ‘instances’, and whatever ‘zookeeper’ is, then these are your kind of meeting minutes.

Weblate

We currently use Transifex to translate MusicBrainz into other languages (sound interesting? Join the community translation effort!), but are planning to move to Weblate, an open-source alternative that we can self-host. Pros and cons were discussed, and it seems that Weblate can provide a number of advantages, including discussion of translation strings and ease of implementation across all our projects. Adjusting it to allow for single sign-on will involve some work. Video tutorials and introducing the new tool to the community were put on the to-do list.

ListenBrainz Roadmap and UI/UX

When a new user comes to ListenBrainz, where are they coming from, what do they see, where are we encouraging them to click next? Can users share and invite their friends? Items discussed were specific UI improvements, how we can implement ‘calls to action’, and better sharing tools (please contribute to the community thread if you have ideas). It was acknowledged that we sometimes struggle at implementing sharing tools because the team is (largely) not made up of social media users, and that we should allow for direct sharing as well as downloading and then sharing. Spotify, Apple Music, and Last.FM users were identified as groups that we should or could focus on.

Messages and Notifications

We agreed that we should have a way of notifying users across our sites, for site-user as well as user-user interactions. There should be an ‘inbox-like’ centre for these, and adequate granular control over the notification options (send me emails, digests, no emails, etc.), and the notification UI should show notifications from all MeB projects, on every site. We discussed how a messaging system could hinder or help our anti-spam efforts, giving users a new conduit to message each other, but also giving us possible control (as opposed to the current ‘invisible’ method of letting users direct email each other). It was decided to leave messaging for now (if at all), and focus on notifications.

Year in Music

We discussed what we liked about last year's Year in Music (saveable images, playlists) and what we thought could be improved (lists, design, sharing, streamlining). We decided that this year each component needs a link so that it can be embedded, as well as sharing tools. We decided to publish our Year in Music in the new year, with the tentative date of Wednesday, January 4th, and let Spotify go to heck with their ‘not really a year yet’ December release. We will use their December date to put up a blog post reminding people to get their listens added or imported in time for the real YIM!

Mobile Apps

The mobile app has been making great progress, with a number of substantial updates over the last year. However, it seems to be suffering an identity crisis, with people expecting it to be a tagger on the level of Picard (or not really knowing what they expect) and then leaving bad reviews. After a lot of discussion (another popular and polarising topic!) it was agreed to make a new slimmed-down ListenBrainz app to cater to the ListenBrainz audience, and leave the troubled MusicBrainz app history behind. An iOS app isn't out of the question, but that is something for the future. akshaaatt has beaten me to the punch with his blog post on this topic.

MusicBrainz UI/UX Roadmap

The MusicBrainz dev and design team got together to discuss how they could integrate design and a broader roadmap into the workflow. It was agreed that designers would work in Figma (an online layout/mockup design tool) and that developers should decide case by case whether an element should be standalone or shared among sites (using the design system). We will use React-Bootstrap for shared components. As the conversion to React continues, it may also be useful to pull in designers to look at UI improvements as we go. It was agreed to hold regular team meetings to make sure the roadmap gets, and stays, on track and to get the redesign (!) rolling.

Thank you

Revealed! Left to right: Aerozol, Monkey, Mayhem, Atj, lucifer (laptop), yvanzo, alastairp, Bitmap, Zas, akshaaatt

On behalf of everyone who attended, a huge thanks to the wonderful denizens of Barcelona and OfficeBrainz for making us all feel so welcome, and MetaBrainz for making this trip possible. See you next year!

GSoC’22: Personal Recommendation of a track

Hi Everyone!

I am Shatabarto Bhattacharya (also known as riksucks on IRC and hrik2001 on GitHub). I am an undergraduate student pursuing Information Technology at UIET, Panjab University, Chandigarh, India. This year I participated in Google Summer of Code under MetaBrainz and worked on implementing a feature to send personal recommendations of a track to multiple followers, along with a write-up. My mentors for this project were Kartik Ohri (lucifer on IRC) and Nicolas Pelletier (monkey on IRC).

Proposal

For readers who are not acquainted with ListenBrainz: it is a website where you can integrate multiple listening services and view and manage all your listens on a unified platform. You can not only peruse your own listening habits, but also find other interesting listeners and interact with them, for example by sending them recommendations or pinning recordings. My GSoC proposal was about adding one more such interaction: sending personalized track recommendations to your followers.

Community Bonding Period

During the community bonding period, I was finishing up work on implementing a feature to hide events in a user’s feed and correcting the response of the missing MusicBrainz data API. I had also contributed to ListenBrainz before GSoC, working on small tickets and implementing deletion of events from the feed as well as the display of missing MusicBrainz data.

Coding Period (Before midterm)

While coding, I realized that the schema and the paradigm for storing data in the user_timeline_event table that I had suggested in the proposal wouldn’t be feasible. Hence, after discussion with lucifer on IRC, we decided to store recommendations as JSONB metadata with an array of user IDs representing the recommendees. I had to rack my brain a bit and polish my SQL skills to craft the queries, with help and supervision from lucifer. There was also a time when the backend code that accepted requests from the client had a very poorly written validation system, and Pydantic wouldn’t validate properly when fields were assigned after a model had been instantiated. But after planning the whole thing out, the backend and the SQL code came out elegant and efficient. The PR for that is here.

{
    "metadata": {
        "track_name": "Natkhat",
        "artist_name": "Seedhe Maut",
        "release_name": "न",
        "recording_mbid": "681a8006-d912-4867-9077-ca29921f5f7f",
        "recording_msid": "124a8006-d904-4823-9355-ca235235537e",
        "users": ["lilriksucks", "lilhrik2001", "hrik2001"],
        "blurb_content": "Try out these new people in Indian Hip-Hop! Yay!"
    }
}

Example POST request to the server, for personal recommendation.
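
To give a rough idea of the JSONB approach described above, the insert could look something like the following minimal sketch. The table and column names here are assumptions for illustration only; the actual schema and queries live in the PR.

import json
import psycopg2

def insert_personal_recommendation(conn, user_id, metadata):
    # Store one recommendation event; the recommendees stay inside the
    # JSONB metadata column as the "users" array shown above.
    # Table and event_type names are hypothetical.
    with conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO user_timeline_event (user_id, event_type, metadata)
                 VALUES (%s, 'personal_recording_recommendation', %s)
            """,
            (user_id, json.dumps(metadata)),
        )
    conn.commit()

Keeping the recommendees inside the JSONB payload means one row per recommendation event, rather than one row per recommendee.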

Coding Period (After midterm)

After the midterm, I started working on creating the modal. At first my aim was to create a dropdown menu for the search bar using Bootstrap (as most of the codebase used Bootstrap rather than components coded from scratch). But after a while I consulted with lucifer and monkey and went with coding it from scratch. I had also planned to implement traversing the search results with the arrow keys, but that feature couldn’t be implemented in time. Here are screenshots of what was created in this PR.

Accessing menu button for personal recommendation

A modal will appear with a dropdown of usernames to choose for personal recommendation

Modal with usernames chosen and a small note written for recommendation

If you have grokked my proposal, you might notice that the UI/UX of the finished modal differs from the one proposed. This is because while coding it, I realized that the modal needs to not only look pretty but also fit the design system. Hence the pills were made blue (the proposed color was green). While I was finishing up the view for seeing recommendations in the feed, I realized that recommenders might want to see the people they had recommended a track to. So I asked lucifer and monkey if they would like such a feature, and both agreed; hence this UI/UX was born:

Peek into the feed page of recommender

What a recommendee sees

Special thanks to CatQuest and aerozol for their feedback over IRC regarding the UI/UX!

Experience

I am really glad that I got mentorship from lucifer and monkey. Both are people I look up to: they are not only good at their craft but are also very good mentors. I really enjoy contributing to ListenBrainz because almost every discussion is a round-table discussion; anyone can chime in, make suggestions, and have very interesting conversations on IRC. I am very glad that my open source journey started with MetaBrainz and its wholesome community. It is almost every programmer’s dream to work on projects that actually matter, and I had the good luck to work on one that is actually used by a lot of people, in a large codebase that many people have contributed to. It really pushed my boundaries and made me more confident about being good at open source!

GSoC 2021: Pin Recordings and CritiqueBrainz Integration in ListenBrainz

Hi! I am Jason Dao, aka jasondk on IRC. I’m a third year undergrad at University of California, Davis. This past summer, I’ve been working with the MetaBrainz team to add some neat features to the project ListenBrainz.


Kartik Ohri joins the MetaBrainz team!

I’m pleased to announce that Kartik Ohri, AKA Lucifer, a very active contributor since his Code-in days in 2018, has become the latest staff member of the MetaBrainz Foundation!

Kartik has been instrumental in rewriting our Android app, and more recently has been helping us with a number of tasks, including new features for ListenBrainz and AcousticBrainz, as well as breathing some much-needed life into the CritiqueBrainz project.

These three projects (CritiqueBrainz, ListenBrainz and AcousticBrainz) will be his main focus while working for MetaBrainz. None of these projects has had enough engineering time recently to move new features forward; we hope that with Kartik’s efforts we can deliver more features faster.

Welcome to the team, Kartik!

Playlists and personalised recommendations in ListenBrainz

Just in time for Christmas we are pleased to announce a new feature in our most recent release of ListenBrainz: the ability to create and share your own playlists! We have also created two playlists for each ListenBrainz user, containing music you listened to in 2020. Check out your lists at https://listenbrainz.org/my/recommendations. Read on for more details…

With our continuing work on using data in ListenBrainz to generate recommendations, we realised that we needed a place to store lists of music. That sounded like playlists to us, so we added them to ListenBrainz. As always, we did this work in the public ListenBrainz repository. You can now create your own playlists with the web interface or by using the API. Recordings in playlists map to MusicBrainz identifiers. If you’re trying to add something and can’t find it, make sure that it’s in MusicBrainz first.

Once you have a playlist, you can listen to it using our built-in BrainzPlayer, or export it to Spotify if you have an account there. If you have already linked your Spotify account to ListenBrainz you may have to re-authenticate and give us permission to create playlists on your behalf. Playlists can also be exported in the open JSPF format using the ListenBrainz API.
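
If you prefer the API route, creating a playlist looks roughly like the sketch below. The payload follows the JSPF convention of identifying tracks by their MusicBrainz recording URLs; the exact fields here are illustrative, so check the ListenBrainz API documentation for the authoritative format.

import requests

jspf_playlist = {
    "playlist": {
        "title": "Winter favourites",
        "track": [
            # JSPF identifies a track by its MusicBrainz recording URL;
            # <recording-mbid> is a placeholder.
            {"identifier": "https://musicbrainz.org/recording/<recording-mbid>"},
        ],
    }
}

response = requests.post(
    "https://api.listenbrainz.org/1/playlist/create",
    json=jspf_playlist,
    headers={"Authorization": "Token YOUR_LISTENBRAINZ_TOKEN"},
)
response.raise_for_status()
print(response.json())  # should include the new playlist's identifier

Exports work in the same JSPF format, so a playlist can round-trip between tools that speak it.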

Over the last year we’ve started thinking about how to use data in MetaBrainz projects to generate recommendations of new music for people to listen to. For this reason, we started the Troi recommendation framework. This Python package allows developers to build pipelines that take data from different sources and combine it in order to generate recommendations of music to listen to. We have already developed data sources using MusicBrainz, ListenBrainz, and AcousticBrainz. If you are a developer interested in working on recommendations in the context of ListenBrainz, we encourage you to check it out.
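
To illustrate the pipeline idea (and only the idea: the class and method names below are hypothetical, not Troi’s actual API, which lives in its repository), a pipeline is essentially a chain of elements where each element consumes the previous element’s output:

# Hypothetical sketch of the pipeline concept; not Troi's real API.
class Element:
    def read(self, inputs):
        raise NotImplementedError

class TopRecordingsSource(Element):
    """Pretend data source yielding a user's top recordings."""
    def __init__(self, user):
        self.user = user
    def read(self, inputs):
        # A real source would query ListenBrainz statistics here.
        return [{"recording_mbid": "<mbid>", "year_first_listened": 2020}]

class FirstListenedFilter(Element):
    """Keep only recordings first listened to in a given year."""
    def __init__(self, year):
        self.year = year
    def read(self, inputs):
        return [r for r in inputs if r["year_first_listened"] == self.year]

def run_pipeline(elements):
    data = None
    for element in elements:
        data = element.read(data)
    return data

playlist = run_pipeline([TopRecordingsSource("rob"), FirstListenedFilter(2020)])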

Now that we can store playlists we needed some content to fill them with. Luckily we have some great projects worked on by students over the last few years as part of MetaBrainz’ participation in the Google Summer of Code program, including this year’s work on statistics and summary information by Ishaan. Using Troi and ListenBrainz statistics, we got to work. Every user who has been contributing data to ListenBrainz recently now has two brand-new 2020 playlists: one based on the top recordings you listened to in 2020, and one with the recordings you first listened to in 2020. If you’re interested in the code behind these playlists, you can see the code for each (top tracks, first tracks) in the troi repository.

If you’re a long-time user of ListenBrainz you may be familiar with the problem of matching your listens to content in MusicBrainz in order to be able to do things with them. We’ve been working hard on a solution to this problem and have built a new tool using typesense to provide a quick and easy way to search for items in the MusicBrainz database. You are using this tool when you create a playlist using the web interface and search for a recording to add. This is still a tech preview, but in our experience it works really well. Thanks to the team at typesense for helping us with our questions over the last few weeks!
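
For the curious, a lookup against a typesense server looks something like the sketch below, using the typesense Python client. The collection name and the fields being queried are assumptions for the sake of illustration:

import typesense

client = typesense.Client({
    "api_key": "YOUR_SEARCH_API_KEY",  # placeholder
    "nodes": [{"host": "localhost", "port": "8108", "protocol": "http"}],
    "connection_timeout_seconds": 2,
})

# Search a hypothetical "recordings" collection for candidate matches.
results = client.collections["recordings"].documents.search({
    "q": "never gonna give you up",
    "query_by": "recording_name,artist_name",
})
for hit in results["hits"]:
    print(hit["document"])  # candidate MusicBrainz recordings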

This work is still in its early days. We thought that this was such a great feature that we wanted to get it in front of you now. We’re happy to take your feedback, or hear if you are having any problems: open a ticket on our bug tracker, come and talk to us on IRC, or @ us. Did we give you a bad jam? Sorry about that! We’d love to have a conversation about what went well and what didn’t, in order to improve our systems. In 2021 we will start generating weekly and daily playlists based on your recent listens, using our collaborative filtering recommendation system.

Merry Christmas from the whole MetaBrainz team!

GSoC 2020: Manage your listens better with ListenBrainz

Hey! My name is Shivam Kapila (shivam-kapila on IRC) and I am a final year undergrad at National Institute of Technology Hamirpur. I have been working on the ListenBrainz project this summer as a participant in the Google Summer of Code program. The past four months were full of fun, hacking and loads of music!!

Landing into the MetaBrainz Community!

My journey with MetaBrainz began in late January this year, when I introduced myself to the community. My first PR improved the developer documentation by adding sections on setting up the Spark infrastructure locally, along with consolidating and improving other bits of the documentation. I delved into real code while implementing front end components for Deleting Listens. Over the next few months, I fixed various bugs, like making the Importer Modal responsive, fixing the DB setup scripts, fixing pagination issues while browsing listens, handling stat calculation errors in the Spark Reader, and flushing user stats when they delete their listens.

As a GSoC applicant, I proposed to add various Listen Management features like love/hate (aka feedback) and deleting individual listens in ListenBrainz. I also proposed a new design for the Listens page. This involved a lot of designing and research, going through UI/UX design guidelines and tuning colors, shades and shadows till we arrived at a presentable and subtle design.

And finally I boarded the GSoC train 🙂 .

Bonding with the community

I had been part of the community since January, so I was familiar with how things work in ListenBrainz. I decided to contribute to the TimescaleDB migration, where we moved our primary listen store from InfluxDB to TimescaleDB, opening up a ton of features for us to work on. Here is the final migration PR containing the commits of my contribution.

I also contributed to the testing infrastructure, making it easier for devs to test patches on their local setups. Following this, I upgraded the postgres-client to PG12 when we migrated to Postgres 12. I also fixed a minor font bug on the profile page.

The GSoC journey begins

Laying the base

As the official coding period began, I started working on my proposed tasks. The first question was: how to store the feedback? So I began implementing the database changes to store the recording feedback and applying the necessary changes in production. Following this I added a Python module to interact with the database and implemented a Pydantic model to validate the feedback records before they are stored in the database or served over the API. Then I added the necessary APIs to store and fetch the feedback for a given user or recording. This was followed by improving the efficiency of the DB module.
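
A feedback record in this scheme is small and easy to validate up front. As a minimal sketch (field names inferred from the description above; the real model lives in the ListenBrainz codebase), a Pydantic model could look like this:

from pydantic import BaseModel, validator

class Feedback(BaseModel):
    user_id: int
    recording_msid: str
    score: int  # 1 = love, -1 = hate, 0 = remove existing feedback

    @validator("score")
    def score_must_be_valid(cls, value):
        if value not in (-1, 0, 1):
            raise ValueError("score must be -1, 0 or 1")
        return value

# Raises a ValidationError before anything touches the database:
Feedback(user_id=1, recording_msid="124a8006-d904-4823-9355-ca235235537e", score=5)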

I also worked on dumping the recording feedback in the ListenBrainz public dumps. Since ListenBrainz had migrated the stats calculation infrastructure from Google BigQuery to Apache Spark, I also removed the BigQuery references from the ListenBrainz website. Once the Timescale migration work had stabilized, I began working on the Delete a Listen feature.

Pulling out the front end brushes

Now that the base was ready to build on, I started working on the React components so that the feedback and deletion features could actually be presented on the website. Around the same time, the Timescale release day was getting near, so I helped with a few tests and finished up the work for deleting listens. The front end components also started looking good, and we were ready to wire the back end up to them.

Rectifying & Reactifying

And so the final phase started. Now that a few components were ready, some production components needed tweaks to keep things subtle. So I shot off an improvement PR to tweak some shadows, adjust fonts and component heights, stick the footer to the bottom, and Reactify the loading spinner. Then came the Listen Count Card denoting the number of listens for a user. Following this, we moved to a card-based design for displaying listens.

This was followed by the much-awaited feedback controls, and now we can love/hate the songs in our listen collection. Isn’t this amazing! Some minor tweaks were needed to handle the ‘playing now’ listens correctly. At the same time, following the MetaBrainz guidelines to write quality code, I worked on making the SQL queries more readable. Then came the much-awaited Delete a Listen feature, and now we can finally get rid of those embarrassing listens!!

I also addressed some high-priority tasks, like giving users an option to download their submitted feedback as JSON. We noticed some UI glitches, and then came three back-to-back PRs to update the feedback control shades, improve the listen time text and smooth out the deletion animation. This is how the listen list looks now:

List of listens

What’s next??

Now comes the time to talk about what’s ahead. The tasks currently on my radar are adding cover art support, so that the page looks more alive, and improving the Spotify imports to only import listens played after the latest Spotify listen we already have for a user.

After this I aim to work on the recommendation features being actively pursued by the team. Mr_Monkey and I have also been working on some design concepts for the All New ListenBrainz. I am pretty excited to work on it. Wanna take a sneak peek?

A new fam

The journey with MetaBrainz has been so amazing that I am tempted to stick around. I feel ecstatic to be a part of GSoC with the best org 🙂 . The best part is that it’s never all about code; there’s a lot more to gain. Each day marked gaining maturity and thinking more and more like a real developer. I started feeling at ease with the communicate → code → integrate chain. I feel really fortunate to be a part of the MetaBrainz family, where everyone is a ping away ❤ .

GSoC marks the kickstart of my journey with MetaBrainz and I will be here lurking on IRC, shooting PRs to make the projects more and more awesome.

Heartiest Gratitude

GSoC 2020: Adding Statistics and Graphs for ListenBrainz Users and Community

Hey everyone! I am Ishaan Shah (ishaanshah), a sophomore at International Institute of Information Technology – Hyderabad, India. This summer, I worked on ListenBrainz as a participant in Google Summer of Code ’20. My project involved generating statistics and visualisations for users using Apache Spark. This blog is an overview about the work I did and my experience working with ListenBrainz.

I started contributing to ListenBrainz in January 2020. My first PR was for LB-179, a small Quality of Life improvement to the LastFM importer. My first major contribution was porting the LastFM importer to ReactJS. Over the next two months, I continued working on the frontend, mainly improving the infrastructure by adding support for automated testing, porting the codebase to TypeScript, and standardising the code using ESLint and Prettier.

After making a few patches, I understood how ListenBrainz worked and got comfortable with the codebase. I decided to make a proposal for adding statistics to ListenBrainz using Apache Spark. While writing the proposal, I referred to many other websites and blogs, as well as community discussions, for different ideas about statistics which could be added. After some research, I narrowed down the specific graphs and statistics that I wanted to calculate during GSoC.

Community Bonding Period

Since I had been working with the MetaBrainz community since January, I was familiar with how things worked there, so we decided to use the Community Bonding Period for fixing and updating the Top Artists charts for a user. The first task that I took up was to add an API endpoint for fetching a user’s Top Artists data programmatically. Until then I had mostly worked on the frontend, so this task helped me get familiar with the backend architecture. Next, I worked on porting the Top Artists graph from d3 to nivo, a charting library built with ReactJS and d3.

The Top Artists graph previously supported only All Time statistics, so I worked on adding support for more time ranges. This was the first time I had worked with Apache Spark, and the PR for it took quite some time, but it was essential that we got it right, as most of the statistics we built later would use a similar workflow. After we were satisfied with the overall flow of data from our Spark cluster to the web server, I started working on showing the stats for different time ranges on the website. Although this task seemed easy at first, it took much longer than expected: we encountered some bugs and received some user feedback when we deployed the graph to production. The rest of this period was spent incorporating that feedback and fixing the bugs.

Top Artists shown on the Charts page
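
The endpoint for fetching those stats programmatically is part of the public ListenBrainz API. A quick sketch of using it (the response fields shown are illustrative; see the API documentation for the exact schema):

import requests

resp = requests.get(
    "https://api.listenbrainz.org/1/stats/user/ishaanshah/artists",
    params={"range": "all_time", "count": 5},
)
resp.raise_for_status()
for artist in resp.json()["payload"]["artists"]:
    print(artist["artist_name"], artist["listen_count"])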

First Coding Period

We now had a somewhat stable pipeline for calculating the stats and sending them to the server. I started working on the backend for a user’s Top Releases stats. We ran into memory issues when calculating these stats on the cluster, and after spending some time finding the cause, I realised that we were collecting the results all at once, which caused the driver to run out of memory. I fixed this by collecting the results for each user separately and tweaking some RabbitMQ parameters to make sure that messages aren’t dropped while being sent to the server (PR #897). After this, I added Top Recordings for a user. We now had a brand new Charts page that displayed the user’s Top Artists/Releases/Recordings for different time ranges.

Next I started working on temporal statistics for a user, i.e., the number of listens over a past time range. The query that I wrote for calculating this data turned out to be pretty inefficient for larger datasets, so I ended up writing two versions of the same query: one for large datasets and one for smaller ones. While working on displaying these stats on the frontend, I tried various representations of the data, and finally settled on bar graphs, as shown in this report view.

Listening Activity shown on the Reports page
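
Coming back to the driver-memory fix for a moment: the general shape of the change is to stream results out of Spark instead of materialising them all in the driver at once. A rough sketch, where the helper and table names are made up for illustration:

from pyspark.sql import SparkSession

def publish_to_rabbitmq(message: dict) -> None:
    """Hypothetical helper that forwards one user's stats to the server."""
    ...

spark = SparkSession.builder.appName("top_releases").getOrCreate()
top_releases = spark.sql("""
    SELECT user_name, release_name, count(*) AS listen_count
      FROM listens
     GROUP BY user_name, release_name
""")

# toLocalIterator() pulls one partition at a time, so the driver never
# holds the full result set the way collect() would.
for row in top_releases.toLocalIterator():
    publish_to_rabbitmq(row.asDict())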

Second Coding Period

I added two more graphs in this period: Daily Activity and Artist Origins. The Daily Activity graph shows the number of listens a user has at a particular time of the day. I implemented the query for calculating this data in a slightly different way from the Listening Activity query, which improved the query speed significantly. I had some trouble finding the right way to represent this data; my mentor helped by suggesting a heatmap, and the results turned out to be pretty good.

Daily Activity shown on the Reports page

Next, we worked on the Artist Origins graph, which provides insight into the geographical diversity of a user’s musical taste. I had a lot of help from the ListenBrainz team for this graph and couldn’t have done it without them. This was by far the most interesting stat that I worked on during the project; furthermore, it laid out a general framework for calculating statistics using the data from MusicBrainz. After deploying this map to production, we received feedback from users that the map looked plain for most of them, with little colour difference between regions. This happened because people generally tend to listen to more songs from their home country, so there is a huge difference between the country with the most artists and the average number of artists from other countries. We fixed this issue by changing the colour scale from linear to logarithmic.

Comparison between linear and log scale in Artist Origins Map
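
A toy example of the effect: with a linear scale, a dominant home country squashes every other value towards zero, while a logarithmic scale preserves visible differences. (The counts below are made up.)

import math

artist_counts = {"IN": 5000, "US": 120, "GB": 45, "JP": 3}
max_count = max(artist_counts.values())

for country, count in artist_counts.items():
    linear = count / max_count
    logarithmic = math.log1p(count) / math.log1p(max_count)
    print(f"{country}: linear={linear:.2f}  log={logarithmic:.2f}")

On the linear scale every country except IN rounds to nearly zero; on the log scale they all remain distinguishable.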

Final Coding Period

We now turned our attention towards calculating some stats for the whole website. We decided to make a graph of the Top Artists over different time ranges. We thought that this would be relatively easy, given that we had already done something similar for individual users. However, we hit an unexpected bump: the data we were calculating was not accurate, mainly because listens come from various different sources, and minor variations in an artist’s name or metadata resulted in separate entries with separate listen counts for the same artist. Moreover, we found a couple of users spamming our website for self-promotion, and we did not have a solid way to deal with this. Around this time my college resumed, and the amount of time I could dedicate to LB was severely reduced. So we decided to use the remaining time to work on improving the frequency at which stats are updated. I have an open PR (#1052) for this at the time of writing, and we should be able to land this functionality in the near future.

Artist Origins shown on the Reports page

Experience

The past four months have taught me a lot. I learnt new technical concepts every day. I started writing code as a developer rather than a programmer. I understood the importance of proper unit and integration testing (even though it was my least favourite part of adding new functionality). I also found it much easier to talk and interact with people, both online and in real life. Frequent deployments of new features to production helped us a lot: we were able to catch bugs while we still had some context on the code we had written, and we received feedback from users about how the new features could be improved. It also kept me motivated to keep working on new graphs and statistics, and gave me a sense of satisfaction when I saw them on the production server. I also learnt that things don’t always go the way we expect them to; more often than not, you will run into some bumps while adding new features, so it is better to keep some extra time to deal with these issues.

GSoC gave me a wonderful opportunity to work with some amazing people from all over the globe. I was not able to complete all the graphs that I had planned for this summer, but I do plan to continue working on ListenBrainz to add more statistics and new features.

Special Thanks

  • Param Singh (iliekcomputers) for being an amazing mentor and helping me whenever I was stuck on an issue.
  • Robert Kaye (ruaok) for providing some really insightful feedback and the MusicBrainz data that was required for calculating the Artist Origin map.
  • Nicolas Pelletier (Mr_Monkey) for helping me with the frontend for the user Charts page and providing some amazing tips for ReactJS.

Migration to TimescaleDB complete!

Yesterday I posted about why we decided to make the switch to TimescaleDB and then later in the day we actually made the switch!

We are now running a copy of InfluxDB and a copy of TimescaleDB at the same time — in case we find problems with the new TimescaleDB database, we can revert to the InfluxDB database.

In the process of migrating we got rid of a pile of nasty duplicates that used to be created by importing from last.fm. We also got rid of some bad data (timestamp 0 listens) that were pretty much useless and were cluttering the data. If you find that you are missing some data besides some duplicates, please open a ticket.

The move to TimescaleDB allows us to create new features, such as deleting a listen (which should be released later this summer), and various other features, because the underlying DB is much more flexible than InfluxDB. However, right this second there are no real new features for end users — more are coming soon, we promise!

A big thank you to shivam-kapila, iliekcomputers and ishaanshah for helping with this rather large, long-running project!