GSoC 2018: A way to associate listens with MBIDs

Hi, I’m Kartikeya Sharma, a postgrad student at the National Institute of Technology, Hamirpur. I worked on the MessyBrainz project as a student developer for GSoC 2018, and Robert Kaye mentored me during the programme. The goal of my project was to associate MBIDs with MSIDs and to cluster together the MSIDs that represent the same MBID. An MBID, or MusicBrainz Identifier, is a Universally Unique Identifier that is permanently assigned to each entity in the MusicBrainz database. An MSID, or MessyBrainz Identifier, is associated with each unique recording, artist_credit, and release in the MessyBrainz database. In simple words, MSIDs represent unclean metadata, whereas MBIDs represent clean metadata.

This blog post summarizes the work that I did in my project, which was divided into three parts.

Processing the data already in the MessyBrainz database

The first part involved creating clusters using the MBIDs already present in the MessyBrainz database: clusters for recordings, artists, and releases. To implement this part I created three PRs: #37, #41, and #44.

After that, I began working on the second part, which involved creating clusters using artist and release MBIDs and names fetched from the MusicBrainz database. To access the MusicBrainz database, I first had to add methods to BrainzUtils that fetch artist MBIDs, and release names and MBIDs, using recording MBIDs. The part that fetches artist MBIDs was done during the community bonding period in PR #14 at BrainzUtils; to fetch releases I created PR #18 at BrainzUtils during the GSoC coding period. After that, I created one PR to create clusters using the fetched artist MBIDs (#47) and another to create clusters using the fetched releases (#49).

I wrote around 60 tests, which proved vital in making sure that the code does what it’s supposed to do.

Processing the data as it is inserted into the MessyBrainz database

Creating clusters for all the data already inside the database requires a lot of resources, so it is better to create clusters as recordings are inserted into the database. But even clustering at insertion time is inefficient if done inline, so incoming recordings are instead sent to a RabbitMQ server and from there to a clustering script, which runs continuously in a separate container and clusters the incoming recordings. That way clustering does not slow down the process of submitting recordings to the database. For this I created PR #50.
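
To give an idea of what such a consumer looks like, here is a minimal sketch using the pika client. The queue name, connection details, and the cluster_recording() helper are illustrative assumptions, not the actual code from PR #50.

```python
# Minimal sketch of a clustering consumer; queue name, host, and the
# cluster_recording() helper are hypothetical, not the actual PR #50 code.
import json

import pika


def cluster_recording(recording):
    """Placeholder for the real clustering logic."""
    print("clustering", recording.get("recording_msid"))


def callback(channel, method, properties, body):
    recording = json.loads(body)
    cluster_recording(recording)
    # Acknowledge only after clustering succeeds, so messages are not lost.
    channel.basic_ack(delivery_tag=method.delivery_tag)


connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = connection.channel()
channel.queue_declare(queue="messybrainz_recordings", durable=True)
channel.basic_consume(queue="messybrainz_recordings", on_message_callback=callback)
channel.start_consuming()
```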

Creating endpoints to access MSIDs and MBIDs

I created two API endpoints in PR #51. One endpoint fetches the MBIDs and MSID associated with a given MSID; the other fetches MSIDs using an MBID. This way end users can access MBIDs and MSIDs, which may be used for calculating various stats.
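
As an illustration of how a client might consume such endpoints (the URL paths and response shapes below are hypothetical; the real routes are defined in PR #51):

```python
# Hypothetical client usage; paths, host, and response fields are assumptions,
# not the exact routes added in PR #51.
import requests

BASE_URL = "https://messybrainz.org"  # assumed host

# Fetch the MBIDs (and canonical MSID) associated with a recording MSID.
msid = "d23f4719-9212-49f0-be6a-3ce4b63ae4e8"  # example MSID
response = requests.get(f"{BASE_URL}/{msid}")
response.raise_for_status()
print(response.json())

# Fetch the MSIDs associated with a recording MBID.
mbid = "b9ad642e-b012-41c7-b72a-42cf4911f9ff"  # example MBID
response = requests.get(f"{BASE_URL}/mbid/{mbid}")
response.raise_for_status()
print(response.json())
```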

Apart from that, with the help of my mentor, I set up a VM to test the above code on the MessyBrainz data dump. This task had some challenges: first, I had to create indexes on various fields to speed up the clustering process. Without indexing it would have taken approximately 37 days; with the indexes in place it took just 3 hours. I found out that PostgreSQL also allows creating indexes on expressions (functions), which came in handy while creating artist_credit clusters, for which I had written a custom function. The indexes were created in PR #53. When I ran the clustering code on a VM holding the whole MessyBrainz data dump, I found that some fields in the recording_json table which are supposed to store MBIDs instead contained empty strings. This was not supposed to happen, as ListenBrainz is currently the only source of data for MessyBrainz: users cannot submit to MessyBrainz directly, and ListenBrainz validates listens before submitting them. So those recordings must have been inserted before that validation was in place. To solve the problem I created PR #52.
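
For illustration, expression (function) indexes of the kind described above can be created like this; the JSON keys, index names, and normalising expression are assumptions rather than the exact statements from PR #53:

```python
# Sketch only: JSON keys, index names, and the normalising expression are
# illustrative, not the exact indexes created in PR #53.
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://messybrainz:messybrainz@localhost/messybrainz")

with engine.begin() as connection:
    # An expression index on the recording MBID stored inside the JSON
    # document, so lookups by MBID no longer scan the whole table.
    connection.execute(text(
        "CREATE INDEX IF NOT EXISTS recording_mbid_ndx "
        "ON recording_json ((data ->> 'recording_mbid'))"
    ))
    # An index on a function of a column: queries whose WHERE clause applies
    # the same function (here lower()) can use the index directly.
    connection.execute(text(
        "CREATE INDEX IF NOT EXISTS artist_lower_ndx "
        "ON recording_json (lower(data ->> 'artist'))"
    ))
```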

The summer was a great learning experience for me. I started slowly, as things were messy at the start and not everything was crystal clear to me: I wasn’t sure exactly how to write scripts that manipulate the database, and I wrote them in the most naive way possible, issuing a query for every single MBID to first check whether it was present in the recording_cluster tables and, if not, cluster the recording. That is conceptually correct but not efficient by any means. The same work can be done with a single query on the recording_json table that fetches only those recording MBIDs which are not present in the recording_redirect table, since those are the unclustered ones. That way we don’t reprocess recording MBIDs that have already been handled, which makes clustering far more efficient.
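
Roughly, the single-query approach boils down to something like the following; the table and column names follow the MessyBrainz schema only loosely and are meant as a sketch:

```python
# Sketch of fetching only the unclustered recording MBIDs in one query;
# table/column names are indicative and may differ from the real schema.
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://messybrainz:messybrainz@localhost/messybrainz")

with engine.connect() as connection:
    result = connection.execute(text("""
        SELECT DISTINCT (data ->> 'recording_mbid')::uuid AS recording_mbid
          FROM recording_json
         WHERE data ->> 'recording_mbid' IS NOT NULL
           AND (data ->> 'recording_mbid')::uuid NOT IN (
                   SELECT recording_mbid FROM recording_redirect
               )
    """))
    unclustered_mbids = [row.recording_mbid for row in result]
```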

With time I got an understanding of how clusters are created and how to handle anomalies, such as the artist name “James Morrison”, which can refer to more than one artist. In the end, the definition of an anomaly can be put like this: an MSID represents an anomaly if it points to different MBIDs in the entity_redirect table (where entity can be artist_credit, recording, or release).
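
Under that definition, anomalies can be found with a grouped query of roughly this shape (again, the table and column names are only indicative):

```python
# Sketch: find MSIDs that point to more than one MBID, i.e. anomalies.
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://messybrainz:messybrainz@localhost/messybrainz")

with engine.connect() as connection:
    anomalies = connection.execute(text("""
        SELECT recording_msid
          FROM recording_redirect
      GROUP BY recording_msid
        HAVING COUNT(DISTINCT recording_mbid) > 1
    """)).fetchall()
    print(len(anomalies), "anomalous MSIDs found")
```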

Work to be done ahead

The project is still in its initial stages and requires a lot of work before it can move into production. We still need to write integration tests for ClusterWriter and the API endpoints. After that, we can work on the Additional Ideas from my proposal. We need to figure out some way to associate MBIDs with MSIDs for the artists, recordings, and releases where no MBIDs are present. This does not seem like a trivial task, with so many anomalies to take care of.

The last three months have been a great experience for me. I would like to thank Robert Kaye, Param Singh, and Alastair Porter, who helped me solve a lot of the problems that I encountered during the entire period. Working on their suggestions and reviews, I was able to write good-quality code that was efficient as well. The work culture at MetaBrainz inspired me a lot: we have weekly IRC meetings where we get to know what others at the organization are doing and report what we did in the past week. I would like to thank MetaBrainz and Google for giving me the chance to get involved in open source on such a cool project. The association of MSIDs with MBIDs can be used by ListenBrainz, as stats are calculated on MSIDs, which can then be mapped onto MBIDs, which represent clean metadata. I would like to keep working on the project because of the learning opportunities it offers.

Our next major challenge: Fixing the MusicBrainz site design for an improved user experience

Back in 1998, when I started playing with Perl and wrote the CD Index (the precursor to MusicBrainz), I was learning web development and had little understanding of web design. The tools I was using were primitive at the time, and the results were cringeworthy and have not withstood the test of time.

Fast forward some 18 years and we’ve arrived at the current MusicBrainz site design — there have been minor facelifts over time and a bigger one when we released NGS back in 2011. But really, the site design hasn’t changed much, and we’ve kept gluing features and new bits of data onto the crappy design, leaving us with the mess of a user experience we know as the modern MusicBrainz.

Our community has been asking us to improve the UX for a long time — we need to:
Empower our community with better tools for developing, editing, and viewing the magnificent data that we have.
Build a stronger foundation for further development, interaction, and extension of our projects in the future.
Make our projects more welcoming to newcomers by lowering the learning curve while keeping the workflow of advanced editors intact.

Fortunately for us, Chhavi [a design student from IIT, India] has become an active contributor to the MetaBrainz projects. She has been studying our sites and how we work as a team and has volunteered to drive the process to fix the UI and the user experience issues on the MusicBrainz site. She has proposed a part of this work as her Google Summer of Code project.

Our overall goal as a team is to create a design system which will help the designers and developers stay in sync, give a more unified theme to our projects, and make it easier for new contributors to join. This will also make it much easier for our developers to address your requests for features and bug fixes more quickly in the future.

We are not barging into your online lives and trying to make our sites pretty — instead, we are focusing on the real experiences you have with them. We held long, detailed conversations during our last summit in Barcelona, where Chhavi was also present, and discussed many of the concerns that might be running through your head while you read this. As part of this initiative, we have been interviewing a number of key members of our project to understand what we and our users really need from this revamp. We have also kept track of community discussions around this topic. From all of this we decided that our users fall into three broad categories:

  1. There are those who contribute to code and understand database tech.

  2. Experienced/advanced MusicBrainz editors who don’t understand database tech.

  3. New users, who feel hopelessly lost in the current scenario.

To make all this research, discussion, and feedback available for everyone to go through, we have started a Jira issue type, Design, that tracks all the design-related tickets of MusicBrainz. The most notable tickets that show mock-ups of future MusicBrainz pages include:

When you look at these pages, please keep in mind that we’re trying to clean up the clutter and make things simple and clean: easier to understand for experienced editors and new ones alike. The data that we have should be presented in a way that makes sense. The data should also make visible the gaps and holes it presently has, so that people can fill them. Data should also be our binding link to exploit the full potential of the projects that we have, such as ListenBrainz or CritiqueBrainz.

We are not trying to fluff things up and make them look pretty. Prettiness might come with the simplicity that we are chasing. Having user flows that do not hamper speed and that make our lives easier is our utmost goal.

That said, we are happy to receive feedback on the upcoming designs as well as the process. If you have any, please post your comments to the appropriate tickets in Jira linked above. We’re currently getting some pressing dev tasks out of the way before we start the actual implementation of the redesigned project. Once our team is ready to work on this, we will publish more blog posts about how this project will unfold and how it will impact our users.


500 Commits of Summer: My story of FOSS and GSoC

This story of summer started in a dull grey winter at home. Bored, I started lingering around IRC channels and, much like Alice in Wonderland, stumbled into the wonderful world of FOSS. Little did I know, it was gonna be one of the best things that happened to me. Below is a story of bugs, PRs, repos, commits, and some more commits. But it is also a story of curiosity, learning, frustrations (a lot of them), resilience (more than you think) and some amazing, amazing people in the community. If I have to sum it up for you, I couldn’t think of a better way than this. So here it goes…

Disclaimer: It was a long journey, hence the long blog.

GSoC 2017: Rating System in CritiqueBrainz

Hello!

I am Pinank Solanki, an undergrad at the Indian Institute of Technology (IIT) Mandi, India. I worked with the MetaBrainz Foundation on one of its projects as part of Google Summer of Code 2017 over the last summer. It was one of the best and most exciting summers I have ever had.

Let me begin at the beginning. I first came to know about MusicBrainz in January, first contacted the community in February, and was immediately hooked. Initially I decided to make a proposal for adding book reviews to CritiqueBrainz, but it was not possible because the BookBrainz web service was unstable and the CritiqueBrainz host didn’t have direct access to the BookBrainz database. So I tried to pitch my own ideas. But then, in one of the weekly meetings, I saw great support and enthusiasm among the community members for a rating system for reviews, and I personally liked the idea and thought it would be a great addition to CritiqueBrainz. I submitted my proposal, got accepted, and a treat for my friends was due!

Overview

The aim of the project was to add support for three types of reviews: text, rating, and text+rating (CB previously supported only text reviews).

The schema changes and data-access functions are completed and merged. The frontend part is mostly complete, including the fundamental functionality along with additional features. It took a lot of time to select and modify a rating input plugin that perfectly satisfies the project’s needs. There is still some work to be done, most of which depends on the rating scale conversion in the db package. Similarly, most of the web service part is complete and is held up by the rating scale conversion PR.

Implementation

Schema changes

The schema changes made are quite different from what was mentioned in the proposal. My mentor for the project, Roman Tsukanov (Gentlecat), recommended some changes which make keeping track of revisions a lot easier. You can see the schema here and the PR here.

Data-access functions

By the time I started working on the project, CB had migrated off the ORM, so I wrote raw SQL queries and their tests. See the PR here. The rating scale was decided to be 1–5, but for storage a 0–100 scale is used, just like in MusicBrainz, keeping in mind the possibility of migrating ratings from MB to CB (more info at CB-245). This part is covered in the PR here.
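
As a rough illustration of that conversion (not the actual helpers in the CritiqueBrainz db package):

```python
# Minimal sketch of the 1-5 <-> 0-100 rating conversion; the real helpers in
# the CritiqueBrainz db package may be named and structured differently.
RATING_SCALE_1_5 = 5
RATING_SCALE_0_100 = 100


def rating_to_storage(rating):
    """Convert a 1-5 user rating to the 0-100 scale stored in the database."""
    if rating is None:
        return None
    return rating * RATING_SCALE_0_100 // RATING_SCALE_1_5


def rating_from_storage(stored):
    """Convert a stored 0-100 rating back to the 1-5 scale shown to users."""
    if stored is None:
        return None
    return round(stored * RATING_SCALE_1_5 / RATING_SCALE_0_100)


assert rating_to_storage(4) == 80
assert rating_from_storage(80) == 4
```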

Changes in user-interface

A rating input plugin is used for rendering the rating star icons. The code can be seen in this PR. See the images below to get a good idea of the implementation.

Write review page: (screenshot: cb-write-review)

Review page: (screenshot: cb-review)

Entity page: (screenshot: cb-entity-page)

Revision comparison: (screenshot: cb-revision-comparison)

Web service

All the functionalities added to CritiqueBrainz had to be implemented in the web service (API) as well. All three types of reviews and other features are now supported via the web service. See the PR here.

Documentation

The chief part of the documentation work was updating the schema. Other than that, a rating parameter and several notes were added to the API documentation. See the PR here.

Other PRs relevant to the project can be found here.

Future work

First of all, I will complete the leftover work. The web service and frontend PRs depend on the rating scale PR; once it gets merged, it’s 2–3 days of work to complete the rest.

Other than that, I look forward to continuing to contribute to CritiqueBrainz and other MetaBrainz projects. I am sure many interesting ideas will be discussed at the annual MetaBrainz Summit in Barcelona.

Conclusion

It was quite an eventful summer, and GSoC was the biggest part of it. Thanks to Roman for his constant help and guidance over the entire summer, and also to all the other community members. It was so cool to work on an open-source project, and I would definitely suggest that any music and data lover explore the MetaBrainz projects.

GSoC 2017: Hacking on ListenBrainz

Namaste!

I am Param Singh, an undergraduate at the National Institute of Technology, Hamirpur, India, and I worked on ListenBrainz over the summer as part of the Google Summer of Code program. I started contributing code to ListenBrainz in January 2017 and have been working on new features and bug fixes since. I’ll be writing about the work I did and my experience working on LB in this blog post.

After a few of my patches had made it in and I was comfortable with the ListenBrainz codebase (which was a really nice example of software architecture for me), I talked with the LB team about what contributions I could make over the summer, and we decided that a Google BigQuery based statistics system would be useful to have in ListenBrainz after we release a beta and have listen data that is permanently archived. I made a proposal for adding statistics to ListenBrainz, which got accepted! During the community bonding period, we decided to try to get a solid and stable beta of ListenBrainz released before starting on the relatively large code additions that my project proposal would require. We tracked the issues that we wanted fixed before a release in the MetaBrainz ticket tracker here. This work of fixing release-blocking issues extended into the coding period, and we decided to continue working on a solid beta instead of adding new features for the time being.

I started with fixing bugs and adding new features to get a beta released as soon as possible. Some cool stuff I worked on during this time was dockerizing MessyBrainz (see PR here), migrating the codebases of MessyBrainz and ListenBrainz to Python 3 (PRs here and here) and improving the startup resilience of various parts of ListenBrainz to make sure that the server is able to self-heal (partially) if some part of it like RabbitMQ goes down (ticket here).

Later on, I did a big refactor of the LB code so that adding new modules would be easier in the future (PR here). I also spent a lot of time fixing bugs in our listen deduplication. Relevant pull requests for this are here and here.

Another feature I added to ListenBrainz while working on the beta was incremental imports. Earlier, LB didn’t keep track of a user’s previous imports and did a full Last.FM import every time. Now we keep track of the last time each user imported listens and only import new data since then. The PR adding incremental imports is here.
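
The idea behind incremental imports can be sketched roughly like this: remember the timestamp of the newest listen already imported for a user and ask Last.FM only for tracks scrobbled after it. The helper below is illustrative and is not the actual ListenBrainz importer code.

```python
# Sketch of the incremental-import idea: only ask Last.fm for tracks newer
# than the stored "latest import" timestamp. Not the real LB importer.
import requests

LASTFM_API_URL = "https://ws.audioscrobbler.com/2.0/"


def fetch_new_listens(lastfm_user, api_key, latest_import_ts):
    """Fetch only the listens newer than the last import (UNIX timestamp)."""
    params = {
        "method": "user.getrecenttracks",
        "user": lastfm_user,
        "api_key": api_key,
        "format": "json",
        "limit": 200,
        # Last.fm's 'from' parameter restricts results to tracks scrobbled
        # after the given UNIX timestamp.
        "from": latest_import_ts,
    }
    response = requests.get(LASTFM_API_URL, params=params)
    response.raise_for_status()
    return response.json()["recenttracks"]["track"]
```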

My mentor, Robert Kaye (ruaok), set up a test instance of the ListenBrainz server that was used by the community, and as the community kept throwing their data at us, bugs kept popping up. A particularly weird bug caused LB to lose data for users with special characters in their usernames. The PR to fix this took a lot of time to create.

We kept on fixing bugs for a long time and the biggest thing I took away from this period of GSoC was the Ninety-ninety rule: «The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.» This summer has drilled this into my mind.

As soon as the beta was released, I started writing code for statistics, making schema changes (PR here) and adding some user stats (PRs here and here). I’ll be continuing the stats work after Summer of Code. The basic foundation of the stats is mostly done, and soon I’ll start showing statistics to users.

By the end of the official GSoC coding period, I have made 266 commits in the ListenBrainz codebase and have opened a total of 111 pull requests. The current production ListenBrainz running on https://listenbrainz.org has 253 commits by me, most of which were made during the GSoC period.

Over the summer, I have fallen in love with the MetaBrainz community and have learned a lot of stuff. I’m really looking forward to adding more features to ListenBrainz soon, so that the data that the community is contributing becomes useful to everyone. I loved working on a really cool open-source project like ListenBrainz this summer and am very thankful to Google for providing me this opportunity. I would encourage everyone reading this to give the ListenBrainz beta a try and contribute to ListenBrainz if possible.

GSoC 2017: Directly accessing MusicBrainz DB in CritiqueBrainz

Hello, everyone! This summer was fantastic for me!
I’m Suyash Garg, an undergraduate at National Institute of Technology, Hamirpur and I participated in Google Summer of Code 2017 contributing code to CritiqueBrainz. Alastair Porter mentored me during this GSoC programme. This post summarizes my contributions to the project and experiences that I had throughout the summer.

I started contributing to CritiqueBrainz in January 2017, and before the start of the SoC programme I mainly worked on writing raw SQL for retrieving data from the CB database and replacing the ORM code (CB-230). Other than that, I worked on CB-120, CB-235, and other minor bugs and issues. These were my first proper contributions to the open source world. Thank you MetaBrainz!!

For Google Summer of Code 2017, my project involved retrieving data related to various entities (release groups, artists, releases, events, and places) directly from the MusicBrainz database instead of querying the MusicBrainz web service (CB-231). This became necessary because some pages on CB need to fetch a lot of data and therefore made many requests to the MB web service, so they were taking a long time to load. By connecting directly to the database, we could reduce the load time of these pages.

Here is a summary of my contributions to the project during the summer:

Accessing the MusicBrainz database
New infrastructure allows us to easily read data directly from the MusicBrainz database. For accessing the database in the development environment, another service running the MusicBrainz database was added, reusing an existing Docker image that the MusicBrainz project was already using. This allowed us to share resources between projects. I worked on adding an option to download the database dumps and import the data into the database (see PR#523). I also added the service to the CB docker-compose files and updated the documentation for setting up the development environment (see PR#115 and PR#92).
Fetching data using mbdata.models
After setting up the development environment, my mentor suggested that I use the mbdata package for writing queries to fetch data from the database instead of writing raw SQL. I worked on retrieving information for the place entity and added helpers for fetching relationship information. Following that, I worked on retrieving information for the other entities (release groups, releases, events, and artists). Since SQLAlchemy issues lazy queries to the database, a large number of queries were being made, which could slow things down, as each query requires one round trip to the SQL server (a network trip in production). So, as suggested by my mentor, I also worked on reducing the number of queries made while fetching data for each entity (see PR#135; a rough sketch of this idea follows below). For pages that made a number of requests to the web service, I made PR#121 to fetch information related to multiple entities at the same time.
Testing
For testing, the database queries are mocked using the unittest.mock Python package. The tests added make sure that the code (serializing RowProxy objects to dictionaries, caching, etc.) works properly (see PR#134). Adding a new service (as a separate Docker container) to the test environment and running tests against it was taking too much time (creating the tables and truncating them), so, as suggested by my mentors, mocking the database queries was the best option. Throughout my GSoC period, I learned how important it is to write tests (especially when fixing one thing can easily break another) and to make them run fast. I learned that «If tests don’t run fast, they would be a distraction rather than a help» (quoting from the book “The Art of Agile Development” by James Shore).
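
As an illustration of the eager-loading approach mentioned above, here is a rough sketch using mbdata.models with SQLAlchemy’s joinedload; the connection URL and the assumption that Place exposes an area relationship are mine, and this is not the actual CritiqueBrainz code:

```python
# Illustrative sketch of cutting down lazy-loading round trips with eager
# loading; the connection URL and the Place.area relationship name are
# assumptions, not the actual CB query code.
from mbdata.models import Place
from sqlalchemy import create_engine
from sqlalchemy.orm import joinedload, sessionmaker

engine = create_engine("postgresql://musicbrainz:musicbrainz@localhost/musicbrainz_db")
Session = sessionmaker(bind=engine)
session = Session()

# Without the joinedload() option, accessing place.area later would trigger
# one extra query per object; with it, everything comes back in one query.
places = (
    session.query(Place)
    .options(joinedload(Place.area))
    .filter(Place.name.contains("Arena"))
    .limit(10)
    .all()
)

for place in places:
    area_name = place.area.name if place.area else None
    print(place.gid, place.name, area_name)
```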

Other than these, I also worked on some UI/UX issues, namely CB-80 (adding an option to filter releases with reviews), CB-84 (ordering release groups according to release year), and CB-261 (authenticating requests to the Spotify Web API). CB-130 (reviewing entities with MBID redirects – see PR#145) was also resolved while fixing a production server issue.

This summer was awesome for me. I learned a lot of new things and techniques for writing better code. Thanks to my mentors, Alastair Porter and Roman Tsukanov. Also, great thanks to the lovely MetaBrainz community and to Google for this opportunity. I’m really looking forward to continuing to contribute to CritiqueBrainz and diving into other MetaBrainz projects.

MetaBrainz has been accepted to Summer of Code 2017!

I’m pleased to announce that MetaBrainz has been accepted into the Google Summer of Code program for 2017!

If you are an eligible university student who would like to participate in Summer of Code and get paid to hack on a MetaBrainz project over the summer, take a look at our ideas page for 2017. If this sounds interesting, take a look at our getting started page.

We kindly ask that you carefully read the ideas page and the getting started page before you contact us for help!

Thanks and good luck applying!