GSoC 2019: Support for Reviewing and Rating More Entities on CritiqueBrainz

Hello everybody! My name is Shamroy Pellew, and I am a rising sophomore at SUNY Buffalo.

This summer, as part of Google Summer of Code, I collaborated with the MetaBrainz Foundation on CritiqueBrainz, the foundation’s archive of user‐written music reviews. I have accomplished much in these past four months, and it has been a great experience working under the guidance of my mentor, Suyash Garg. Even though there is still some work to be done, most of the code I wrote has either been merged or is in code review, and I believe it is safe to say I achieved the goal of my original proposal.

Proposal

I initially planned to use the mbdata package to query the MusicBrainz database for information regarding artists, labels, recordings, and works, so that I could achieve my goal of supporting reviews for these entities on CritiqueBrainz. However, I soon discovered BrainzUtils, a Python package with “common utilities used throughout MetaBrainz projects.” So we decided it would be best to use those utilities instead of writing my own. Of course, a few changes had to be made: CritiqueBrainz had features that BrainzUtils was missing, so those had to be moved over and merged. The inclusion of BrainzUtils was the only real divergence between my original proposal and my actual course of action. Otherwise, everything went according to plan.

Phase 1

Adapting CritiqueBrainz code for use in BrainzUtils involved a bit of a learning curve and took up a good portion of the first phase. I had to gain familiarity with both code bases and with the differences between Python 2 and 3. I also had to write some new unit tests to ensure everything was functioning as it should, which I had never done in Python before. The existing BrainzUtils code and feedback from my mentor were a great help, though.

Here are the merged pull requests for this phase:

Phase 2

After I finished moving features to BrainzUtils, but before I could add support for reviewing new entities, I had to convert the existing CritiqueBrainz functionality to use BrainzUtils for data retrieval. This was a simple change, as the same code was being used, just from a different source. Once that was done, I moved on and began working on support for reviewing the new entities.

Here are the merged pull requests for this phase:

Phase 3

Adding support for reviewing the new entity types required the same set of steps for each type. First, each new type was added to the existing SQL script that declares entity types, and for each new type an ALTER script was written. Then, I retrieved information about each entity through BrainzUtils, including any necessary supplementary data. Searching for the new entity types also had to be implemented, using musicbrainzngs, a Python binding for the MusicBrainz web API. So I wrapped the musicbrainzngs search API call in a function and created new HTML templates, using Jinja, for finding the new entities. Finally, I had to enable reviews for the new entity types: I edited the list of reviewable entity types and the existing review templates to include data about the new types.
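For illustration, here is a minimal sketch of what wrapping the musicbrainzngs search calls behind a single helper could look like. The helper name, the entity-to-function mapping, and the user-agent values below are my own placeholders, not the actual CritiqueBrainz code.

```python
# Hedged sketch: wrapping the musicbrainzngs search calls behind one helper.
# The helper name, mapping, and user-agent values are illustrative placeholders.
import musicbrainzngs

# musicbrainzngs requires a user agent to be set before any request is made.
musicbrainzngs.set_useragent("CritiqueBrainz-sketch", "0.1", "https://critiquebrainz.org")

# Map each reviewable entity type to its search function and result key prefix.
SEARCHERS = {
    "artist": (musicbrainzngs.search_artists, "artist"),
    "label": (musicbrainzngs.search_labels, "label"),
    "recording": (musicbrainzngs.search_recordings, "recording"),
    "work": (musicbrainzngs.search_works, "work"),
}

def search_entities(entity_type, query, limit=20, offset=0):
    """Search MusicBrainz for `entity_type` and return (total_count, results)."""
    search_func, key = SEARCHERS[entity_type]
    response = search_func(query=query, limit=limit, offset=offset)
    return response[key + "-count"], response[key + "-list"]

# Example usage: count, labels = search_entities("label", "Warp Records")
```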

Naturally, by this point in the project, a few bugs had popped up. There were problems with handling deleted entities, some with data not being displayed, and even cases where data was completely missing. These were solved as they appeared, and were only minor headaches.

Here are the merged pull requests for this phase:

Overall, there was also some human error on my part that slowed things down. I could have communicated more effectively and delivered each task piece by piece, which would have resulted in better feedback from my mentor.

Conclusion

In total, I opened 17 pull requests across BrainzUtils and CritiqueBrainz. If I had more time, though, I would have liked to work on my stretch goal of incorporating entity ratings from MusicBrainz into CritiqueBrainz. Although I did manage to open a BrainzUtils pull request for serializing the MusicBrainz ratings when fetching information, I did not get a chance to do anything with this data.

I’d like to thank the MetaBrainz Foundation for this amazing opportunity. Thanks to the team and thanks to Google, I was able to produce something that people everywhere will be able to use. I learned a lot about open source this summer, and I was able to brush up on my Python skills. I’m looking forward to continuing my work on CritiqueBrainz with the continued support of the MetaBrainz team!

Delhi Mini-Summit 2018

Rob, Suyash, Param and I met in the bustling city of Delhi, where “horns are applied very liberally” (it is a very noisy city!), for a mini summit. Some may even call it an elaborate set of break-out sessions on ListenBrainz and CritiqueBrainz. Our discussions spanned two days, over laptops and notebooks, on bumpy tuk-tuk rides, and over spicy chicken biryanis. Here is a summary of all that we discussed:

ListenBrainz
Data Visualizations
We started Day 1 with graphs for ListenBrainz. After a long marathon of heavy development weightlifting by Param and Rob (how do we work with BigQuery correctly?), we are finally at a stage where we can build some really cool visualizations from our dataset. What will they be? Where will they be? How will we implement them? Can our community pitch in with requests and maybe even play around with code?

After scrounging through a lot of other websites that do music-y data visualizations, and the few responses to our user survey, we started listing various ideas and went through suggestions on our community forum. We ended up dividing the data visualizations (from now on called graphs) into two categories:

User specific graphs: showcasing a user’s listening history and taste
Site-wide graphs: showcasing the overall listening patterns on ListenBrainz

We had to make some tricky calls based on technical constraints, but for starters we decided on some cool user graphs. We detailed six of them during the summit:

  1. Listening history of a user: how much you have listened, what you have listened to, listen counts, etc.
  2. Your top artists
  3. Your tracklist (listen history)
  4. How much music you explored
  5. Which artists are trending in which parts of the world
  6. Listener count across the world

All these graphs will be available over different time ranges (last week, month, year) and will have handles to manipulate them, as well as tools to easily share them on social media. We think our community will really enjoy tracking their listening history with these. We also discussed a few ideas for creating a sandbox so our community can pitch in with ideas, vote on them, and send pull requests for new graphs. More on that later, as we get there!

Rating System
If you are listening to a tracklist while working on something, how likely is it that you will rate a track, saying “This is 3.5? This is 4.2? That is 5 stars!”? So you see, ratings on ListenBrainz are tricky. The project is very dynamic and interactive in real time, unlike our other dear *Brainz projects, so we think that a Last.fm-like rating, i.e. like and dislike, makes sense for ListenBrainz. There was also some discussion about where the ratings should reside: is CritiqueBrainz the correct place?

Home Page
We worked on redesigning the “My Listens” page as well as the home page. We now plan to include, apart from the graphs, an infographic explaining how ListenBrainz works and the things you can do with it! I will detail the mockup further later this week.

Potential Roadmap
After almost two days of discussions, we chalked up a rough roadmap for ListenBrainz, which includes data visualizations, the ability to rate/like tracks, creating collections, following users, and more. It also includes encouraging cross-*Brainz pollination!

CritiqueBrainz
With Suyash around (he worked on CritiqueBrainz as part of GSoC last year and has been actively involved since), there were obviously a lot of discussions on reinvigorating the project. We discussed quite a few ideas, including innovative ways of writing and sharing reviews, sharing them on social media, cross-*Brainz interactions, a few UI changes, etc. We’re considering allowing Quick Reviews that, like Twitter, are limited to 280 characters. What do you think? Suyash has written down his ideas on these and would love some feedback from the community!

MessyBrainz
During all these talks, a critical need for matching and clustering infrastructure was highlighted. Rob has written a possible roadmap for the project to compose his thoughts!

And of course! We couldn’t let Rob’s first visit to India be all about work. After sunset, we went exploring the city of Delhi. That included rides in tuk-tuks, spicy chicken biryanis, shopping for some colorful clothes and, definitely, the Indian chaat 🙂

All in all, it was a very productive mini summit and definitely made us all, more excited to start working on the ideas we discussed. We will keep you updated and post more soon!

A lot of Indian food!
The troupe at India Gate
Param is really into (a lot of) selfies.

GSoC 2017: Rating System in CritiqueBrainz

Hello!

I am Pinank Solanki, an undergrad at the Indian Institute of Technology (IIT) Mandi, India. I worked with the MetaBrainz Foundation on one of its projects as part of Google Summer of Code 2017. It was one of the best and most exciting summers I have ever had.

Let me start at the beginning. I first came to know about MusicBrainz in January, first contacted the community in February, and was immediately hooked. Initially I decided to write a proposal for adding book reviews to CritiqueBrainz, but that was not possible because the BookBrainz web service was unstable and the CritiqueBrainz host didn’t have direct access to the BookBrainz database. So I tried to pitch my own ideas. But then, in one of the weekly meetings, I saw great support and enthusiasm among the community members for a rating system for reviews, and I personally liked the idea and thought it would be a great addition to CritiqueBrainz. I submitted my proposal, got accepted, and a treat for my friends was due!

Overview

The aim of the project was to add support for three types of reviews: text-only, rating-only, and text+rating (CB previously supported only text reviews).

The schema changes and data-access functions are complete and merged. The frontend part is mostly complete, including the fundamental functionality along with additional features. It took a lot of time to select and modify a rating input plugin that perfectly satisfied the project’s needs. There is still some work to be done, most of which depends on the rating scale conversion in the db package. Similarly, most of the web service part is complete and is held up by the rating scale conversion PR.

Implementation

Schema changes

The schema changes are quite different from what was outlined in the proposal. My mentor for the project, Roman Tsukanov (Gentlecat), recommended some changes that would make keeping track of revisions a lot easier. You can see the schema here and the PR here.

Data-access functions

By the time I started working on the project, CB had migrated off the ORM. So I wrote raw SQL queries and their tests. See the PR here. The user-facing rating scale was decided to be 1-5, but a 0-100 scale is used for storage, just like in MusicBrainz, keeping in mind the possibility of migrating ratings from MB to CB (more info at CB-245). This part is covered in the PR here.
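As a rough illustration of the scale conversion idea (a sketch only, not the exact code from the PR), the mapping between the two scales could look something like this:

```python
# Illustrative sketch of the 1-5 <-> 0-100 conversion; the actual
# CritiqueBrainz implementation in the db package may differ.
RATING_SCALE_1_5 = {1: 20, 2: 40, 3: 60, 4: 80, 5: 100}

def to_storage(rating):
    """Convert a user-facing 1-5 rating (or None) to the stored 0-100 scale."""
    return None if rating is None else RATING_SCALE_1_5[rating]

def from_storage(stored):
    """Convert a stored 0-100 rating back to the nearest 1-5 value.

    Arbitrary 0-100 values can appear if ratings are ever migrated from
    MusicBrainz, so round to the closest step instead of reversing the dict.
    """
    if stored is None:
        return None
    return max(1, min(5, round(stored / 20)))
```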

Changes in user-interface

This plugin is used for rendering the rating star icons. The code can be seen in this PR. See the images below to get a good idea about the implementation.

Write review page:

cb-write-review

Review page:

cb-review

Entity page:

cb-entity-page

Revision comparison:

cb-revision-comparison

Web service

All the functionality added to CritiqueBrainz had to be implemented in the web service (API) as well. All three types of reviews, along with the other features, are now supported via the web service. See the PR here.
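To give a sense of how a text+rating review might be submitted through the API, here is a hypothetical example. The endpoint path, JSON field names, and auth header are assumptions for illustration only; the CritiqueBrainz API documentation is the authoritative reference.

```python
# Hypothetical request; the endpoint, field names, and auth are assumptions.
import requests

payload = {
    "entity_id": "<MBID of the entity being reviewed>",
    "entity_type": "release_group",
    "text": "A solid record from start to finish.",
    "rating": 4,  # user-facing 1-5 scale
    "language": "en",
}
response = requests.post(
    "https://critiquebrainz.org/ws/1/review/",
    json=payload,
    headers={"Authorization": "Bearer <access token>"},
)
response.raise_for_status()
print(response.json())
```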

Documentation

The chief documentation task was to update the schema. Other than that, the rating parameter and several notes were added to the API documentation. See the PR here.

Other PRs relevant to the project can be found here.

Future work

First of all, I will complete the leftover work. The web service and frontend PRs depend on the rating scale PR. Once it gets merged, it’s 2–3 days of work to complete the rest.

Other than that, I look forward to continuing to contribute to CritiqueBrainz and other MetaBrainz projects. I am sure many interesting ideas will be discussed at the annual MetaBrainz Summit in Barcelona.

Conclusion

It was quite an eventful summer, and GSoC was the biggest part of it. Thanks to Roman for his constant help and guidance over the entire summer, and also to all the other community members. It was so cool to work on an open-source project, and I would definitely suggest that any music and data lover explore the MetaBrainz projects.

GSoC 2017: Directly accessing MusicBrainz DB in CritiqueBrainz

Hello, everyone! This summer was fantastic for me!
I’m Suyash Garg, an undergraduate at the National Institute of Technology, Hamirpur, and I participated in Google Summer of Code 2017, contributing code to CritiqueBrainz. Alastair Porter mentored me during the GSoC programme. This post summarizes my contributions to the project and the experiences I had throughout the summer.

I started contributing to CritiqueBrainz in January 2017, and before the start of the GSoC programme I mainly worked on writing raw SQL for retrieving data from the CB database and replacing the ORM code (CB-230). Other than that, I worked on issues like CB-120 and CB-235, and other minor bugs. These were my first proper contributions to the open source world. Thank you, MetaBrainz!

For Google Summer of Code 2017, my project involved retrieving data related to various entities (release-groups, artists, releases, events and places) directly from the MusicBrainz database instead of querying the MusicBrainz web service (CB-231). This became necessary because some pages on CB needed to fetch a lot of data and thus made many requests to the MB web service, which meant those pages took a long time to load. By connecting directly to the database, we could reduce their load time.

Here is a summary of my contributions to the project during the summer:

Accessing the MusicBrainz database
New infrastructure allows us to easily read data directly from the MusicBrainz database. To access the database in the development environment, another service running the MusicBrainz database was added, using an existing Docker image that the MusicBrainz project was already using. This allowed us to share resources between projects. I worked on adding an option to download the database dumps and import the data into the database (see PR#523). I also added the service to the CB docker-compose files and updated the documentation for setting up the development environment (see PR#115 and PR#92).
Fetching data using mbdata.models
After setting up the development environment, my mentor suggested that I use the mbdata package for writing queries to fetch data from the database instead of writing raw SQL. I worked on retrieving information for the place entity and added helpers for fetching relationship information. Following that, I worked on retrieving information for the other entities (release-groups, releases, events, and artists). Also, since SQLAlchemy issues lazy queries, a large number of queries were being sent to the database. This could slow things down, as each query requires one trip to the SQL server (a network trip in production). So, as suggested by my mentor, I also worked on reducing the number of queries made to fetch the data related to each entity (see PR#135). For pages that made a number of requests to the web service, I made PR#121 for fetching information related to multiple entities at the same time.
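As a small illustration of this approach (a sketch only, assuming a configured SQLAlchemy session and a placeholder connection URL, not the actual CritiqueBrainz helper), eager loading with joinedload pulls the related rows in the same round trip:

```python
# Sketch: fetching a place via mbdata.models with eager loading.
# The connection URL and helper name are assumptions for illustration.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, joinedload
from mbdata.models import Place

engine = create_engine("postgresql://musicbrainz@localhost:5432/musicbrainz_db")
Session = sessionmaker(bind=engine)

def get_place_by_mbid(mbid):
    """Fetch a place by MBID, loading its area and type in the same query."""
    session = Session()
    try:
        return (
            session.query(Place)
            .options(joinedload(Place.area), joinedload(Place.type))
            .filter(Place.gid == mbid)
            .one_or_none()
        )
    finally:
        session.close()
```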
Testing
For testing, the database queries are mocked using the unittest.mock Python package. The tests added make sure that the code (serializing RowProxy objects to dictionaries, caching, etc.) works properly (see PR#134). Adding a new service (as a separate Docker container) to the test environment and running tests against it was taking too much time (in creating the tables and truncating them), so, as suggested by my mentors, mocking the database queries was the best option. Throughout my GSoC period, I learned how important it is to write tests (especially when fixing one thing can break another) and to make them run fast. I learned that “if tests don’t run fast, they would be a distraction rather than a help” (quoting from the book “The Art of Agile Development” by James Shore).
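To illustrate the idea (a self-contained sketch, not the project’s actual test code), patching the query helper with unittest.mock lets the test exercise the serialization logic without touching the database. The function names below are illustrative stand-ins.

```python
# Self-contained sketch of mocking a database query with unittest.mock.
# `fetch_place_from_db` and `get_place` are illustrative stand-ins, not
# the actual CritiqueBrainz functions.
import unittest
from unittest import mock

def fetch_place_from_db(mbid):
    """Stand-in for a helper that would normally query the MusicBrainz database."""
    raise RuntimeError("tests should never hit the real database")

def get_place(mbid):
    """Code under test: fetch a place row and serialize it to a dictionary."""
    row = fetch_place_from_db(mbid)
    return {"id": str(row["gid"]), "name": row["name"]}

class PlaceFetchTestCase(unittest.TestCase):

    @mock.patch(f"{__name__}.fetch_place_from_db")
    def test_get_place_serializes_row(self, mocked_fetch):
        # The mock returns a plain dict, standing in for a serialized RowProxy.
        mocked_fetch.return_value = {"gid": "some-place-mbid", "name": "Some Venue"}
        result = get_place("some-place-mbid")
        self.assertEqual(result["name"], "Some Venue")
        mocked_fetch.assert_called_once_with("some-place-mbid")

if __name__ == "__main__":
    unittest.main()
```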

Other than these, I also worked on some UI/UX issues, namely CB-80 (adding an option to filter releases with reviews), CB-84 (ordering release groups by release year), and CB-261 (authenticating requests to the Spotify Web API). CB-130 (reviewing entities with MBID redirects; see PR#145) was also solved while fixing a production server issue.

This summer was awesome for me. I learned a lot of new things and techniques for writing better code. Thanks to my mentors, Alastair Porter and Roman Tsukanov. Also, great thanks to the lovely MetaBrainz community and to Google for this opportunity. I’m really looking forward to continuing to contribute to CritiqueBrainz and to diving into other MetaBrainz projects.