GSoC 2018: SpamBrainz – Fighting spam in MusicBrainz using machine learning

Hi, I’m Leo, and I spent my summer building and training SpamBrainz, our new solution for fighting spam in MusicBrainz. If you haven’t heard of SpamBrainz before, it’s probably because it did not exist before this year’s Summer of Code.

For quite a while now, spam in MusicBrainz has been a growing problem. Often it takes the form of automatically created editor accounts whose descriptions look not unlike the spam emails most of us get every day, promoting other websites and services.

During last year’s MetaBrainz Summit we discussed possible solutions to this and came up with the Spam Ninja system. Essentially this means that Soon™ there will be a group of editors that receive spam reports and have the ability to delete editors and entities that are nothing but spam.

Now with MusicBrainz having almost two million registered editors, could we really expect the Spam Ninjas to manually check every single one of them in addition to all the new registrations? Obviously not, and this is where SpamBrainz comes in.

SpamBrainz is a machine learning system that looks at all editors and decides whether or not it thinks they are spammers. If it thinks they are, it automatically notifies the Spam Ninjas, who then decide whether or not SpamBrainz was correct.

What’s great about this system is that a human is guaranteed to look at every report; at no point does a computer alone decide that you’re a spammer who should be banned. No one wants machines to run the world, right?

Building SpamBrainz

While most GSoC projects involve adding features to existing systems, SpamBrainz is something entirely new, and I had not built anything on this scale before, so I started out by doing tons of research.

When building a machine learning project you should always start by doing some good old statistics, trying to figure out what matters about your data and how the system could use it. I wrote a couple of Jupyter notebooks (which are great for working with data) to do this.

As I was not working for MetaBrainz at the time and had to respect our privacy policy, I wrote a script that collects the most common values of a few different editor fields, anonymizes them and saves them to a report. Using that data I could compare all spam and non-spam editors and decide upon a set of datapoints that would be useful for my machine learning model. Yvanzo then ran the script on the live database, and I could happily do my data analysis without compromising user privacy.
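To give an idea of what that exploratory step looks like, here is a minimal sketch, assuming an anonymized per-editor report with a hand-labelled spam flag. The file name and column names are hypothetical, not the ones from the actual report:

```python
import pandas as pd

# Hypothetical anonymized report: one row per editor, with a manually
# assigned "is_spam" label and a few candidate datapoints.
editors = pd.read_csv("editor_report_anonymized.csv")

# For each candidate feature, compare how often it is set for spammers
# versus legitimate editors; large gaps suggest useful model inputs.
for column in ["has_website", "bio_length", "email_domain_is_free"]:
    print(editors.groupby("is_spam")[column].mean())
```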

Next I built a pretty boring Flask-based API that allows MusicBrainz to queue up editor analysis and training. Quite a few different MetaBrainz projects use Python and need to access the MusicBrainz database, so a long time ago someone wise decided to move commonly used code into a repository called brainzutils-python. All I had to do was add some code for accessing editor data through it.

In a surprise move by ruaok I was then hired by MetaBrainz as a contractor with a yearly salary of 100g of chocolate. I probably should have negotiated what kind of chocolate but what mattered most was that I could now work with user data without breaching our privacy policy.

But before I could build my Keras model, I had to decide on a final set of input features and write code for preprocessing the data. Only then could I finally get started building and testing models.

SpamBrainz’s current state-of-the-art model is Lodbrok, which turned out to work really well, reaching 99% accuracy in detecting spam while misclassifying only 0.2% of real users as spammers. Obviously the latter won’t be a problem because, after all, a Spam Ninja will still check these reports.
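For readers who want a feel for what such a model looks like, here is a minimal Keras sketch of a binary spam classifier, assuming editors have already been preprocessed into fixed-length numeric feature vectors. This is not the actual Lodbrok architecture, just the general shape of the problem:

```python
from keras.models import Sequential
from keras.layers import Dense, Dropout

def build_model(n_features):
    model = Sequential([
        Dense(64, activation="relu", input_shape=(n_features,)),
        Dropout(0.5),                    # guard against overfitting
        Dense(32, activation="relu"),
        Dense(1, activation="sigmoid"),  # probability that the editor is spam
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```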

Future outlook

Now that GSoC is over, I could just disappear with all the money and leave SpamBrainz in its current state, but obviously that’s not what I am planning to do.

I would like to work with zas on getting it deployed along with the Spam Ninja system, improve the code documentation, and try to tackle the remaining problem that is online learning (which, as it turns out, isn’t as easy as I had thought).

With spam always evolving and spammers already moving to more sophisticated methods than just using editor biographies, I’d also look into building separate models for other entities.

After all, SpamBrainz is just getting started, and I’m very much looking forward to continuing our journey towards reducing the spam we all have to endure on MusicBrainz and other MetaBrainz projects.

GSoC 2018: More detailed integration of AcousticBrainz with MusicBrainz

A fantastic summer is coming to an end, and it is time to wrap up the GSoC project I have been working on for the last 3 months (the official GSoC coding period).

Hello people!!

I am Rashi Sah, an undergraduate student at the National Institute of Technology, Hamirpur, India. I have been working on a really cool AcousticBrainz project for MetaBrainz Foundation Inc. as a participant in Google Summer of Code ’18. It has been an amazing experience and I’ve learned a lot over the summer, spending countless days and nights taking the project to completion. I decided to contribute to MetaBrainz in late December, spent some time understanding the project’s codebase, and since January 2018 have been creating pull requests and pushing commits for many features and tasks and fixing bugs. This blog post covers my GSoC experience as a student and the work I’ve done for the program so far.

Before the GSoC program started, I went looking for some good first bugs and found some tickets to work on. Then I talked to the AcousticBrainz community members and started contributing. I created some big PRs, mostly adding new features to AcousticBrainz, and also worked on many bug fixes which are already merged into the AcousticBrainz codebase. The new feature PRs include AB-21, AB-98 and AB-298. In mid-February, I started looking for a suitable idea to work on for the GSoC program and to write a proposal for it. As March approached, I discussed my proposal extensively with MetaBrainz community members, especially with Alastair, the AcousticBrainz project lead, who helped me a lot by reviewing the proposal and guiding me to improve it. In late April, my proposal for a more detailed integration of AcousticBrainz with MusicBrainz was accepted. During the community bonding period, I mostly continued the work I had been doing for the previous 3–4 months.

Getting entity information from the MusicBrainz database

The first thing I worked on when the official GSoC coding period began was adding a way to directly access the MusicBrainz database for different entities to the MusicBrainz database module in BrainzUtils (a Python utility package for all of our MetaBrainz projects). I worked on getting artist and release entity information from the MusicBrainz database via a direct connection (see PRs BU-13 and BU-14). Later, I worked on setting up the MusicBrainz server by adding a service to AcousticBrainz’s docker-compose files, allowing us to easily read data directly from the MusicBrainz database in AcousticBrainz (PR AB-334). A major aim of the project was to implement both methods of MusicBrainz database access in AcousticBrainz (the direct connection, and importing the MusicBrainz database into AcousticBrainz from scratch) and then to decide which method works better when implementing a particular functionality in AcousticBrainz using MusicBrainz data.
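As a rough idea of what the direct-connection helpers do, here is a minimal sketch, assuming a psycopg2 connection to a MusicBrainz database. The function name is illustrative rather than the actual BrainzUtils API:

```python
import psycopg2
import psycopg2.extras

def get_artist_by_mbid(conn, mbid):
    # Look up one artist row in the musicbrainz schema by its MBID (gid).
    with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cur:
        cur.execute(
            """SELECT gid, name, sort_name
                 FROM musicbrainz.artist
                WHERE gid = %s""",
            (mbid,),
        )
        return cur.fetchone()
```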

Import the MusicBrainz data in AcousticBrainz database

MusicBrainz’s database contains a huge number of tables, but I analysed the use cases of MB data in AB and made a list of the tables we would actually need for our AcousticBrainz integrations. Then I made a PR (AB-338) creating the new tables in the AB database under a MusicBrainz schema. Later, I worked on a big PR (AB-340) which imports the MB data corresponding to every recording present in AcousticBrainz’s database and writes it into the tables of the MusicBrainz schema in AB. This PR was really huge, and I had to take care of a lot of integrity constraints and foreign key dependencies.

Update MB data in AB for every new recording added to AB

Another feature I worked on after importing the MB data was updating the MB data present in AB whenever a new recording is added to the AcousticBrainz database (see PR AB-346), importing the data from MB’s database via the direct connection. While working on a few bug fixes, my mentor Param and I realized that the MB data import took a lot more time than expected when I ran the MusicBrainz importer script against the full MB data dumps (around 2.8 GB). So I then worked on making the MusicBrainz importer more efficient and was able to import the data for a few recordings within seconds (see PR AB-348). I had to dig into each table import to find the parts of the code that were making things slow.

To reduce the load on the processor, I added a sleep schedule of 5 seconds to the MusicBrainz importer module, so it waits before importing data for any new recording (see PR AB-354). During my GSoC period, I learned how important it is to write tests and make them run fast. I wrote tests for almost every script inside the db module. Later, I worked on writing tests for the MusicBrainz importer script (AB-352).
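Conceptually, the throttling is just a pause between iterations of the import loop, as in this minimal sketch (the helper names are hypothetical):

```python
import time

def importer_loop(get_new_recordings, import_recording):
    # Import whatever is pending, then sleep so the importer never
    # monopolizes the processor.
    while True:
        for recording in get_new_recordings():
            import_recording(recording)
        time.sleep(5)  # wait before checking for new recordings again
```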

Apply replication packets to keep MB data in AB updated with the actual MusicBrainz database

Then came another tricky part of the project: updating the MusicBrainz schema data in AB whenever there is any change in the actual MusicBrainz database, whether an update or a deletion. MusicBrainz provides hourly replication packets which describe the changes to the database in a specific period. Replication packets are .tar.bz2 archives containing a collection of files, and they can be downloaded via the MetaBrainz API. Lukas Lalinsky, a long-time contributor to MetaBrainz projects, the founder of AcoustID and maintainer of the mbdata Python module, had worked on applying replication packets to MB data. I made a lot of modifications to his script so that it applies replication packets to the MusicBrainz schema data in AB, keeping the recordings data present in AcousticBrainz up to date (see AB-350).
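Fetching one packet boils down to downloading and unpacking an archive, roughly as sketched below. The URL pattern and token parameter here are my assumptions about the MetaBrainz API, not verified details:

```python
import tarfile
import urllib.request

def fetch_replication_packet(seq, token, dest="packet"):
    # Hypothetical URL shape for the hourly packet with sequence number `seq`.
    url = ("https://metabrainz.org/api/serve/"
           f"replication-{seq}.tar.bz2?token={token}")
    path, _ = urllib.request.urlretrieve(url)
    with tarfile.open(path, "r:bz2") as tar:
        tar.extractall(dest)  # the extracted files describe the changes to apply
```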

Integration with MB database: Use MBID redirect information to get original entity

After working on the direct connection, importing the MusicBrainz data and keeping it updated by all means, it was time to start writing evaluation scripts to decide the better method for any integration we apply in AcousticBrainz. I wrote a script implementing an integration of AB with the MB database which uses the redirect information of an entity and returns the original entity corresponding to the MBID provided (see PR AB-356).
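The redirect lookup itself is a small query against the gid_redirect tables of the MusicBrainz schema. A minimal sketch for recordings, assuming a psycopg2 connection:

```python
def resolve_recording_mbid(conn, mbid):
    # If the MBID has been redirected, return the MBID of the recording
    # it now points to; otherwise the given MBID is already the original.
    with conn.cursor() as cur:
        cur.execute(
            """SELECT r.gid
                 FROM musicbrainz.recording_gid_redirect rgr
                 JOIN musicbrainz.recording r ON r.id = rgr.new_id
                WHERE rgr.gid = %s""",
            (mbid,),
        )
        row = cur.fetchone()
    return row[0] if row else mbid
```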

Evaluate both methods of MusicBrainz database access in AcousticBrainz

Now on to the last piece of work of my GSoC period, and the most important one as well. After implementing both methods, we needed to evaluate them to test which one is more efficient for a specific integration with the MB database. I first wrote an evaluation script which fetches data from the recording and low-level tables. In this case, the difference between the time taken by the two methods turned out to be really large (approx. 70 seconds for around 250+ recordings). So whenever we have to get data from local AB tables as well as MB tables, we would go for the imported-database method, as it turns out to be faster. Next I tested the MBID redirect integration, where I didn’t find much difference between the two methods (PR AB-357). But I ran these tests locally; tests in production may yield different results.
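The comparison itself can be as simple as timing the same batch of lookups through each access path, roughly like this sketch:

```python
import time

def time_method(fetch, mbids):
    # Run one fetch per MBID and return the total elapsed wall time.
    start = time.monotonic()
    for mbid in mbids:
        fetch(mbid)
    return time.monotonic() - start

# direct_fetch and local_fetch would be hypothetical callables wrapping
# the two access methods; comparing their totals over the same MBIDs
# gives numbers like the ones quoted above.
```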

All in all, it has been an exciting summer. By now I am familiar with a good part of the AcousticBrainz codebase. I really look forward to working on many more integrations with MB data in AcousticBrainz, and I plan to completely remove AB’s dependency on the MusicBrainz web service for accessing MusicBrainz data, which would be very useful for our users.

Details of contributions made

By the end of the GSoC coding period, I had opened a total of 39 PRs: 35 pull requests to the AcousticBrainz server, 3 to BrainzUtils and 1 to the AcousticBrainz client, and had made a total of 135 commits (109 in AB, 9 in BU, 3 in AC and 14 in AB master). Of these, the pull requests created and merged during the official GSoC coding period are the PRs to the AcousticBrainz server and the PRs to BrainzUtils.

These last three months were full of thrill, excitement and quite a bit of frustration as well. And it doesn’t end here; I’d love to keep contributing and act as a maintainer for the AcousticBrainz project. I believe people should try contributing to open source organizations, as it helps you learn and gain a lot of experience in a short period of time, especially when working through a great platform like Google Summer of Code.

I am really happy to have worked with the awesome MetaBrainz community; the people here are fantastic. I’d love to stay a part of MetaBrainz in the future as well. So, in the end, a big thanks to my mentor Param Singh (without his help and support throughout the program, it wouldn’t have been possible for me to reach the end phase of GSoC), to my organization admin Robert Kaye, to AcousticBrainz project lead Alastair Porter, and to all of the MetaBrainz Foundation community members for choosing me as a GSoC student, giving me such a great opportunity and being very kind and helpful throughout the program. And I want to thank Google for making this all possible. Hope I get a chance to work with you all again!!

GSoC 2018: A way to associate listens with MBIDs

Hi, I’m Kartikeya Sharma, a postgrad student at the National Institute of Technology, Hamirpur. I worked on MessyBrainz as a student developer for GSoC 2018, mentored by Robert Kaye. The goal of my project is to associate MBIDs with MSIDs and to cluster together the MSIDs which represent the same MBID. An MBID is a MusicBrainz Identifier: a universally unique identifier permanently assigned to each entity in the MusicBrainz database. An MSID is a MessyBrainz Identifier, associated with each unique recording, artist_credit and release in the MessyBrainz database. In simple words, MSIDs represent unclean metadata whereas MBIDs represent clean metadata.

This blog post summarizes the work that I did in my project, which was divided into three parts.

Processing the data already in MessyBrainz database

The first part involves creating clusters using the MBIDs already present in the MessyBrainz database. This means creating clusters for recordings, artists and releases. To implement this part, I created the following three PRs: #37, #41 and #44.

After that, I began working on the second part, which involves creating clusters using the artist MBIDs, release MBIDs and names fetched from the MusicBrainz database. To access the MusicBrainz database, I first had to work on BrainzUtils, adding methods to fetch artist MBIDs using recording MBIDs, and release names and release MBIDs using recording MBIDs. The part fetching artist MBIDs was done during the community bonding period in PR #14 at BrainzUtils, and for fetching releases I created PR #18 at BrainzUtils during the GSoC coding period. After that, I created a PR to create clusters using the fetched artist MBIDs (#47) and another to create clusters using the fetched releases (#49).

I wrote around 60 tests, which proved vital in making sure that the code does what it’s supposed to do.

Processing the data as it is inserted into the MessyBrainz database

Creating clusters for all the data already inside the database requires a lot of resources, so it is better to create clusters as recordings are inserted into the database. But even this type of clustering is not efficient if done synchronously. So, to cluster these recordings, they are first sent to a RabbitMQ server and from there to a clustering script, which runs continuously in a separate container and clusters the incoming recordings. That way clustering does not slow down the process of submitting recordings to the database. For this I created PR #50.
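A minimal sketch of the consumer side of this pipeline, assuming a pika connection and a hypothetical queue name and clustering helper (the real names in PR #50 differ):

```python
import json
import pika

def run_cluster_writer(cluster_recording):
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="rabbitmq"))
    channel = connection.channel()
    channel.queue_declare(queue="msb_recordings", durable=True)

    def on_message(ch, method, properties, body):
        # Cluster the incoming recording, then acknowledge it so the
        # broker can drop it from the queue.
        cluster_recording(json.loads(body))
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="msb_recordings",
                          on_message_callback=on_message)
    channel.start_consuming()  # blocks, processing recordings forever
```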

Create endpoints to access MSIDs and MBIDs

I created two API endpoints in PR #51. One endpoint fetches the MBIDs and MSIDs associated with an MSID; the other fetches MSIDs using an MBID. This way end users can access MBIDs and MSIDs, which may be used for calculating different stats.
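In shape, the two endpoints look roughly like this sketch; the routes and helper functions are hypothetical, not the actual API from PR #51:

```python
from flask import Flask, jsonify

app = Flask(__name__)

def load_mbids_for_msid(msid):
    # Hypothetical database lookup; stands in for the real data layer.
    return {"msid": msid, "mbids": []}

def load_msids_for_mbid(mbid):
    # Hypothetical database lookup; stands in for the real data layer.
    return {"mbid": mbid, "msids": []}

@app.route("/<uuid:msid>")
def mbids_for_msid(msid):
    return jsonify(load_mbids_for_msid(str(msid)))

@app.route("/mbid/<uuid:mbid>")
def msids_for_mbid(mbid):
    return jsonify(load_msids_for_mbid(str(mbid)))
```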

Apart from that, with the help of my mentor I set up a VM to test the above code on the MsB data dump. This task had some challenges. First, I had to create indexes on various fields to speed up the clustering process: without indexing it would have taken approximately 37 days, but after creating indexes on various fields it took just 3 hours. I found out that PostgreSQL allows creating indexes on functions too, which came in handy while creating artist_credit clusters, for which I had written a custom function. The indexes were created in PR #53. When I ran the clustering code on a VM holding the whole MessyBrainz data dump, I found that we had fields in the recording_json table which are supposed to store MBIDs but contained empty strings. This was not supposed to happen, as ListenBrainz is currently the only source of data for MessyBrainz: users cannot submit to MessyBrainz directly, and ListenBrainz validates listens. So those recordings must have been inserted before that validation was in place. To solve the problem I created PR #52.
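For illustration, the indexes were along these lines, including one on an expression; the exact expressions used in PR #53 differ:

```python
import psycopg2

def create_indexes(conn):
    with conn.cursor() as cur:
        # Expression index so lookups by recording MBID inside the JSON
        # document don't need a sequential scan.
        cur.execute("""CREATE INDEX IF NOT EXISTS recording_mbid_ndx
                         ON recording_json ((data ->> 'recording_mbid'))""")
        # PostgreSQL also allows indexing the result of a function call,
        # which is what made the custom artist_credit function usable.
        cur.execute("""CREATE INDEX IF NOT EXISTS artist_lower_ndx
                         ON recording_json (lower(data ->> 'artist'))""")
    conn.commit()
```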

The summer was a great learning experience for me. I started slowly, as things were messy at the start. Since everything wasn’t crystal clear to me at first, I wasn’t sure how exactly to write scripts that manipulate the database, and I wrote them in the most trivial way possible: querying for every single MBID to check whether it is present in the recording_cluster tables, and clustering the recording if not. That is conceptually correct but not efficient by any means. The same result can be achieved with a single query on the recording_json table that fetches only those recording MBIDs not present in the recording_redirect table, as those are the unclustered ones. That way we don’t reprocess recording MBIDs that have already been handled, making the clustering process efficient.
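The efficient version is a single anti-join that selects only the not-yet-clustered MBIDs, something like this sketch (table and column names follow the description above and may not match the schema exactly):

```python
UNCLUSTERED_RECORDING_MBIDS = """
    SELECT DISTINCT rj.data ->> 'recording_mbid' AS recording_mbid
      FROM recording_json rj
     WHERE rj.data ->> 'recording_mbid' IS NOT NULL
       AND NOT EXISTS (
               SELECT 1
                 FROM recording_redirect rr
                WHERE rr.recording_mbid =
                      (rj.data ->> 'recording_mbid')::uuid
           )
"""
```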

With time I got an understanding of how clusters are created and how to handle anomalies, such as James Morrison (a name shared by more than one artist). In the end, an anomaly can be defined as an MSID that points to different MBIDs in an entity_redirect table (where the entity can be artist_credit, recording or release).

Work to be done ahead

The project is still in its initial stages and requires a lot of work before moving into production. We still need to write integration tests for the ClusterWriter and the API endpoints. After that, we can work on the additional ideas from my proposal. We also need to figure out some way to associate MBIDs with MSIDs for the artists, recordings and releases where no MBIDs are present. This does not seem like a trivial task, with so many anomalies to take care of.

The last three months have been a great experience for me. I would like to thank Robert Kaye, Param Singh and Alastair Porter, who helped me solve a lot of the problems I encountered during the entire period; working on their suggestions and reviews, I was able to write good quality code that was efficient as well. The work culture at MetaBrainz inspired me a lot. We have weekly IRC meetings where we get to know what others at the organization are doing and report what we did in the past week. I would like to thank MetaBrainz and Google for giving me the chance to get involved in open source on such a cool project. The association of MSIDs with MBIDs can be used by ListenBrainz: stats are calculated on MSIDs, which can then be mapped onto MBIDs, which represent clean metadata. I would like to keep working on the project because of the learning opportunities it offers.

500 Commits of Summer: My story of FOSS and GSoC

This story of summer started in a dull grey winter at home. Bored, I started lingering around IRC channels and, much like Alice in Wonderland, stumbled into the wonderful world of FOSS. Little did I know it was going to be one of the best things that ever happened to me. Below is a story of bugs, PRs, repos, commits, and some more commits. But it is also a story of curiosity, learning, frustration (a lot of it), resilience (more than you think) and some amazing, amazing people of the community. If I had to sum it up for you, I couldn’t think of a better way than this. So here it goes…

Disclaimer: It was a long journey, hence the long blog.

GSoC 2017: Rating System in CritiqueBrainz

Hello!

I am Pinank Solanki, an undergrad at the Indian Institute of Technology (IIT) Mandi, India. I worked with the MetaBrainz Foundation on one of its projects as part of Google Summer of Code 2017. It was one of the best and most exciting summers I have ever had.

Let me begin at the beginning. I first came to know about MusicBrainz in January, first contacted the community in February, and was immediately hooked. Initially I planned to write a proposal for adding book reviews to CritiqueBrainz, but that was not possible because the BookBrainz web service was unstable and the CritiqueBrainz host didn’t have direct access to the BookBrainz database. So I tried to pitch my own ideas. But then, in one of the weekly meetings, I saw great support and enthusiasm among the community members for a rating system for reviews, and I personally liked the idea and thought it would be a great addition to CritiqueBrainz. I submitted my proposal, it got accepted, and a treat for my friends was due!

Overview

The aim of the project was to add support for three types of reviews: text-only, rating-only, and text plus rating (CB previously supported only text reviews).

The schema changes and data-access functions are complete and merged. The frontend part is mostly complete, covering the fundamental functionality along with some additional features. It took a lot of time to select and modify a rating input plugin that satisfies the project’s needs. There is still some work to be done, most of which depends on the rating scale conversion in the db package. Similarly, most of the web service part is complete and is held up by the same rating scale conversion PR.

Implementation

Schema changes

The schema changes are quite different from what was outlined in the proposal. My mentor for the project, Roman Tsukanov (Gentlecat), recommended some changes that make keeping track of revisions a lot easier. You can see the schema here and the PR here.

Data-access functions

By the time I started working on the project, CB had migrated off the ORM, so I wrote raw SQL queries and tests for them. See the PR here. The rating scale was decided to be 1–5, but ratings are stored on a 0–100 scale, just like in MusicBrainz, keeping in mind a possible future migration of ratings from MB to CB (more info at CB-245). This part is covered in the PR here.
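A minimal sketch of the scale conversion, assuming equal steps of 20 (the actual mapping in the db package may differ):

```python
def rating_to_storage(stars):
    """Convert a 1-5 star rating to the 0-100 storage scale."""
    if not 1 <= stars <= 5:
        raise ValueError("rating must be between 1 and 5")
    return stars * 20

def rating_to_stars(stored):
    """Convert a stored 0-100 rating back to the nearest star count."""
    return round(stored / 20)
```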

Changes in user-interface

This plugin is used for rendering the rating star icons. The code can be seen in this PR. See the images below to get a good idea about the implementation.

Write review page:

cb-write-review

Review page:

cb-review

Entity page:

cb-entity-page

Revision comparison:

cb-revision-comparison

Web service

All the functionality added to CritiqueBrainz had to be implemented in the web service (API) as well. All three types of reviews and the other features are now supported via the web service. See the PR here.

Documentation

The chief part of the documentation work was updating the schema. Other than that, the rating parameter and several notes were added to the API documentation. See the PR here.

Other PRs relevant to the project can be found here.

Future work

First of all, I will complete the leftover work. The web service and frontend PRs depend on the rating scale PR; once it gets merged, it’s 2–3 days of work to complete the rest.

Other than that, I look forward to keep contributing to CritiqueBrainz and other MetaBrainz projects. I am sure many interesting ideas will be discussed at the annual MetaBrainz Summit in Barcelona.

Conclusion

It was quite an eventful summer, and GSoC was the biggest part of it. Thanks to Roman for his constant help and guidance over the entire summer, and also to all the other community members. It was so cool to work on an open-source project, and I would definitely suggest that any music and data lover explore the MetaBrainz projects.

GSoC 2017: Hacking on ListenBrainz

Namaste!

I am Param Singh, an undergraduate at the National Institute of Technology, Hamirpur, India, and I worked on ListenBrainz over the summer as part of the Google Summer of Code program. I started contributing code to ListenBrainz in January 2017 and have been working on new features and bug fixes since. I’ll be writing about the work I did and my experience working on LB in this blog post.

After a few of my patches had made it in and I was comfortable with the ListenBrainz codebase (which was a really nice example of software architecture for me), I talked with the LB team about the possible contributions I could make over the summer. We decided that a Google BigQuery based statistics system would be useful to have in ListenBrainz once we had released a beta and had listen data that is permanently archived. I made a proposal for adding statistics to ListenBrainz, which got accepted! During the community bonding period, we decided to try to get a solid and stable beta of ListenBrainz released before starting on the relatively large code additions that my project proposal would require. We tracked the issues we wanted fixed before a release in the MetaBrainz ticket tracker here. This work of fixing release-blocking issues extended into the coding period, and we decided to keep working on a solid beta instead of adding new features for the time being.

I started with fixing bugs and adding new features to get a beta released as soon as possible. Some cool stuff I worked on during this time included dockerizing MessyBrainz (see PR here), migrating the codebases of MessyBrainz and ListenBrainz to Python 3 (PRs here and here), and improving the startup resilience of various parts of ListenBrainz to make sure that the server can (partially) self-heal if some part of it, like RabbitMQ, goes down (ticket here).

Later on, I did a big refactor of the LB code so that adding new modules would be easier in the future (PR here). I also spent a lot of time fixing bugs in our listen deduplication; the relevant pull requests are here and here.

Another feature I added to ListenBrainz while working on the beta was incremental imports. Earlier, LB didn’t keep track of a user’s previous imports and did a full Last.FM import every time. Now we keep track of the last time each user imported listens and only import new data since then. The PR adding incremental imports is here.
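Conceptually, the incremental import looks like this sketch, with hypothetical helpers standing in for the real LB internals:

```python
def import_listens_for_user(user, get_latest_ts, fetch_listens_since,
                            save_listens, set_latest_ts):
    # Zero means this user has never imported before, so we fall back
    # to a full import.
    since = get_latest_ts(user) or 0
    listens = fetch_listens_since(user, since)
    if listens:
        save_listens(user, listens)
        # Remember the newest listen so the next run only fetches
        # anything more recent than this.
        set_latest_ts(user, max(l["listened_at"] for l in listens))
```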

My mentor, Robert Kaye (ruaok), set up a test instance of the ListenBrainz server that was used by the community, and as the community kept throwing their data at us, bugs kept popping up. A particularly weird bug caused LB to lose data for users with special characters in their usernames; the PR to fix it took a lot of time to create.

We kept fixing bugs for a long time, and the biggest thing I took away from this period of GSoC was the ninety-ninety rule: “The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.” This summer has drilled that into my mind.

As soon as the beta was released, I started writing code for statistics, making schema changes (PR here) and adding some user stats (PRs here and here). I’ll be continuing the stats work after Summer of Code. The basic foundation of the stats system is mostly done, and soon I’ll start showing statistics to users.

By the end of the official GSoC coding period, I have made 266 commits in the ListenBrainz codebase and have opened a total of 111 pull requests. The current production ListenBrainz running on https://listenbrainz.org has 253 commits by me, most of which were made during the GSoC period.

Over the summer, I have fallen in love with the MetaBrainz community and have learned a lot of stuff. I’m really looking forward to adding more features to ListenBrainz soon, so that the data that the community is contributing becomes useful to everyone. I loved working on a really cool open-source project like ListenBrainz this summer and am very thankful to Google for providing me this opportunity. I would encourage everyone reading this to give the ListenBrainz beta a try and contribute to ListenBrainz if possible.

GSoC 2017: Directly accessing MusicBrainz DB in CritiqueBrainz

Hello, everyone! This summer was fantastic for me!
I’m Suyash Garg, an undergraduate at the National Institute of Technology, Hamirpur, and I participated in Google Summer of Code 2017, contributing code to CritiqueBrainz. Alastair Porter mentored me during the programme. This post summarizes my contributions to the project and the experiences I had throughout the summer.

I started contributing to CritiqueBrainz in January 2017, and before the start of the SoC programme I mainly worked on writing raw SQL to retrieve data from the CB database, replacing the ORM code (CB-230). Other than that, I worked on issues like CB-120, CB-235 and other minor bugs. These were my first proper contributions to the open source world. Thank you MetaBrainz!!

For Google Summer of Code 2017, my project involved retrieving data related to various entities (release groups, artists, releases, events and places) directly from the MusicBrainz database instead of querying the MusicBrainz web service (CB-231). This became necessary because some pages on CB needed to fetch a lot of data and therefore made many requests to the MB web service, which made those pages slow to load. By connecting directly to the database, we could reduce their load time.

Here is a summary of my contributions to the project during the summer:

Accessing the MusicBrainz database
New infrastructure now allows us to easily read data directly from the MusicBrainz database. For accessing the database in the development environment, another service running the MusicBrainz database was added, reusing an existing Docker image that the MusicBrainz project was already using; this allowed us to share resources between projects. I worked on adding an option to download the database dumps and import the data into the database (see PR#523). I also added the service to the CB docker-compose files and updated the documentation for setting up the development environment (see PR#115 and PR#92).
Fetching data using mbdata.models
After setting up the development environment, my mentor suggested that I use the mbdata package for writing queries to fetch data from the database instead of writing raw SQL. I worked on retrieving information for the place entity and added helpers for fetching relationship information. Following that, I worked on retrieving information for the other entities (release groups, releases, events and artists). Also, since SQLAlchemy makes lazy queries to the database, a large number of queries were being issued, which could slow things down: each query requires one trip to the SQL server (a network trip in production). So, as suggested by my mentor, I also worked on reducing the number of queries made when fetching data related to each entity (see PR#135). For pages that made many requests to the web service, I made PR#121 to fetch information related to multiple entities at the same time.
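A minimal sketch of the query-count reduction, using SQLAlchemy eager loading on an mbdata model; the relationship name here is an assumption based on the MusicBrainz schema:

```python
from mbdata.models import Place
from sqlalchemy.orm import joinedload

def get_place(session, mbid):
    # joinedload pulls the related area in the same SELECT, instead of
    # letting SQLAlchemy lazily issue a second query when it is accessed.
    return (
        session.query(Place)
        .options(joinedload(Place.area))
        .filter(Place.gid == mbid)
        .one_or_none()
    )
```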
Testing
For testing, the database queries are mocked using the unittest.mock Python package. The tests make sure that the code (serializing RowProxy objects to dictionaries, caching, etc.) works properly (see PR#134). Adding a new service (as a separate Docker container) to the test environment and running tests there took too much time (creating the tables and truncating them), so, as my mentors suggested, mocking the database queries was the best option. Throughout my GSoC period, I learned how important it is to write tests (especially when you break things more when you fix something) and to make them run fast. I learned that “if tests don’t run fast, they would be a distraction rather than a help” (quoting from the book “The Art of Agile Development” by James Shore).
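The mocking pattern is roughly as follows; the dotted patch target is illustrative, standing in for the project’s actual data-access functions:

```python
from unittest import mock

# "critiquebrainz.db.place.get_place" is a hypothetical path; the real
# tests patch the actual data-access functions.
@mock.patch("critiquebrainz.db.place.get_place")
def test_place_lookup(mocked_get_place):
    mocked_get_place.return_value = {"name": "Some Place"}
    # The code under test receives the canned row above and never
    # touches a real database; here we only check the mock is wired up.
    assert mocked_get_place.return_value["name"] == "Some Place"
```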

Other than these, I also worked on some UI/UX issues, namely CB-80 (adding an option to filter releases with reviews), CB-84 (ordering release groups according to release year) and CB-261 (authenticating requests to the Spotify Web API). CB-130 (reviewing entities with MBID redirects; see PR#145) was also solved while fixing a production server issue.

This summer was awesome for me. I learned a lot of new things and techniques for writing better code. Thanks to my mentors, Alastair Porter and Roman Tsukanov. Also, great thanks to the lovely MetaBrainz community and to Google for this opportunity. I’m really looking forward to continuing to contribute to CritiqueBrainz and to diving into other MetaBrainz projects.