GSoC 2020: Spam detection with online learning

Introduction

Hello Everyone!!

I am Rohit Dandamudi, more commonly known as diru1100 on IRC and all other sites. I am currently in my final year of Computer Science and Engineering at Chaitanya Bharathi Institute of Technology, Hyderabad. This summer, I had the wonderful opportunity to work with the MetaBrainz Foundation, and it's my first time participating in GSoC. I worked on the SpamBrainz project under the guidance of yvanzo to take a step forward in eliminating spam in MusicBrainz.

How it started

I started looking for some cool projects to apply to for GSoC. After going through a few web-development-oriented ones, I finally got to know about the MetaBrainz Foundation, and by then it was already pretty late (around 2½ weeks before the proposal deadline); most of my fellow GSoCers had already built a good rapport with the community. After looking through the project ideas, I wanted to do my project on CritiqueBrainz, but I later found out it wasn't being considered that year. In the end, I liked the concept of SpamBrainz and how it involves a good combination of technologies (deep learning and web development). After browsing through the project, I understood what I could do, made some changes to the codebase, successfully ran the model, and added some documentation. Finally, I submitted my proposal, which got accepted.

The proposal

My proposal was focused on extending the work done by Leo as part of GSoC 2018. It mainly involved the following:

  • Do the research and implement online learning to:
    • Update the model dynamically as new variations of editor spam accounts appear.
    • Make the model self-sufficient without depending on a particular file or a batch of data.
    • Explore different types of learning that could enhance LodBrok and improve its performance in production.
  • Complete SpamBrainz API to:
    • Use and update the model with API calls.
    • Connect LodBrok with MusicBrainz Server.
  • Write detailed documentation to make the project more accessible and involve more contributors

Achievements

LodBrok model improvements

Research for model live update

SpamBrainz API

  • Incorporated the above research into the SpamBrainz API, which consists of 2 endpoints, namely:
    • /predict, which returns LodBrok's classification results for editor accounts
    • /train, which retrains the model with incorrect results reported by SpamNinjas
  • After discussing with Leo, I decided to implement the API using a Flask and Redis combination. Going with Redis over RabbitMQ is feasible here, as the API is pretty lightweight and handles at most 2 events. (A minimal sketch of this layout follows the diagram below.)
  • Documented the entire API, with internal working, steps to replicate, and images to understand the results obtained.
  • Completed dockerization of SpamBrainz_API for easier integration and testing with MusicBrainz docker.
  • This diagram explains the current workflow of the implemented API:
Diagram explaining the current workflow of the implemented API
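Here is a minimal sketch of the two-endpoint layout described above, assuming a Flask app backed by Redis; the payload fields, queue name, and model helper are illustrative placeholders, not the actual SpamBrainz code:

```python
# Minimal sketch: Flask + Redis layout for /predict and /train.
# The model helper and payload fields below are hypothetical.
import json

from flask import Flask, jsonify, request
from redis import Redis

app = Flask(__name__)
redis_client = Redis(host="redis", port=6379)


def model_predict(editor):
    """Hypothetical wrapper that runs LodBrok on one editor account."""
    return 0.5  # stand-in spam probability


@app.route("/predict", methods=["POST"])
def predict():
    editor = request.get_json()  # editor account fields
    score = model_predict(editor)
    return jsonify({"editor_id": editor.get("id"), "spam_probability": score})


@app.route("/train", methods=["POST"])
def train():
    # Corrections reported by SpamNinjas are queued in Redis;
    # a background worker pops them and retrains the model.
    redis_client.rpush("spambrainz:training", json.dumps(request.get_json()))
    return jsonify({"status": "queued"})
```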

Challenges ahead and future of SpamBrainz

  • The API has to be integrated with MusicBrainz and should undergo more testing with real live data; this part is currently my focus.
    • Note: All the work done on the model so far used dummy data generated by scripts that replicate real accounts as closely as possible, taking into account input from Freso and yvanzo and the analysis done by Leo, without violating the data privacy policy.
  • Extend online learning to other use cases in MetaBrainz through Transfer Learning and Online Transfer Learning.
  • I am also looking forward to writing a research paper about the work done and eventually publishing it in IEEE Transactions, as I plan on using SpamBrainz as my final year major project.

Special thanks to…

  • My mentor, YvanZo, for being incredibly patient with me, helping me create quality commits, and overall making me a better programmer. I have learned something new in every interaction with him.
  • LeoVerto, for helping me out whenever stuck and getting me up to date with the project.
  • MetaBrainz Foundation, for creating an open, inclusive, and productive environment to build some amazing stuff.

GSoC 2020: User Collection for BookBrainz

Hi everyone, I am Prabal Singh, currently studying at the Indian Institute of Technology, Guwahati. This summer I participated in Google Summer of Code and developed a new feature – User Collections – for the project BookBrainz.

I was mentored by Nicolas Pelletier (Mr_Monkey on IRC) during this period. This post summarizes my contributions to the project.

Continue reading “GSoC 2020: User Collection for BookBrainz”

GSoC 2020: Manage your listens better with ListenBrainz

Hey! My name is Shivam Kapila (shivam-kapila on IRC) and I am a final year undergrad at National Institute of Technology Hamirpur. I have been working on the ListenBrainz project this Summer as a participant of the Google Summer of Code program. The past four months were full of fun, hacking and loads of music!!

Landing into the MetaBrainz Community!

My journey with MetaBrainz began in late January this year, when I introduced myself to the community. My first PR improved the developer documentation by adding sections on setting up the Spark infrastructure locally, along with consolidating and improving existing bits of documentation. I delved into real code while implementing front end components for Deleting Listens. Over the next few months, I fixed various bugs: making the Importer Modal responsive, fixing the DB setup scripts, fixing pagination issues while browsing listens, handling stat calculation errors in the Spark Reader, and flushing user stats when they delete their listens.

As a GSoC applicant, I proposed to add various Listen Management features like love/hate (aka feedback) and deleting individual listens in ListenBrainz. I also proposed a new design for the Listens page. This involved a lot of designing and research, going through UI/UX design guidelines and tuning colors, shades and shadows till we arrived at a presentable and subtle design.

And finally I boarded the GSoC train 🙂 .

Bonding with the community

I had been a part of the community since January, so I was familiar with how things work in ListenBrainz. I decided to contribute to the TimescaleDB migration, where we moved our primary listen store from InfluxDB to TimescaleDB, opening up a ton of features for us to work on. Here is the final migration PR containing the commits of my contribution.

I also worked on improving the testing infrastructure to make it easier for devs to test patches on their local setups. Following this, I upgraded the postgres-client to PG12 when we migrated to Postgres 12. I also fixed a minor font bug on the profile page.

The GSoC journey begins

Laying the base

As the official coding period began, I started working on my proposed tasks. The first question was: how to store the feedback? So I began implementing the database changes to store the recording feedback and applying the necessary changes in production. Following this I added a Python module to interact with the database and implemented a Pydantic model to validate the feedback records before they are stored in the database or served over the API. Then I added the necessary APIs to store and fetch the feedback for a given user or recording. This was followed by improving the efficiency of the DB module.
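For a concrete flavour, a feedback record validated with Pydantic could look like the sketch below; the class and field names mirror the love/hate (+1/−1) scheme but are assumptions for illustration, not the exact ListenBrainz schema.

```python
# A minimal sketch of a Pydantic model for recording feedback.
# Field names and the allowed score values are assumptions.
from pydantic import BaseModel, validator


class RecordingFeedback(BaseModel):
    user_id: int
    recording_msid: str
    score: int  # +1 = love, -1 = hate, 0 = remove previous feedback

    @validator("score")
    def score_must_be_valid(cls, value):
        if value not in (-1, 0, 1):
            raise ValueError("score must be -1, 0 or 1")
        return value


# Invalid records raise a ValidationError before touching the database.
feedback = RecordingFeedback(user_id=1, recording_msid="some-msid", score=1)
```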

I also worked on dumping the recording feedback in the ListenBrainz public dumps. Since ListenBrainz had migrated its stats calculation infrastructure from Google BigQuery to Apache Spark, I also removed the BigQuery references from the ListenBrainz website. Once the timescale migration work had become stable, I began working on the Delete a Listen feature.

Pulling out the front end brushes

Now that the base was ready for us to work on, I started working on the React components so that the feedback and deletion feature could actually be presented on the website. Around the same time, the Timescale release day was also getting near, so I helped with a few tests and finished up the work for deleting listens. The front end components also started looking good and we were ready to associate the back end with them.

Rectifying & Reactifying

Time flew, and the final phase started. Now that we were ready with a few components, we needed some tweaks in some production components to make them subtle. Hence I shot an improvement PR tweaking some shadows, adjusting fonts and component heights, sticking the footer to the bottom, and Reactifying the loading spinner. Then came the Listen Count Card, denoting the number of listens for a user. Following this, we moved to a Card based design for displaying listens.

This was followed by the much awaited feedback controls, and now we can love/hate the songs from our listen collection. Isn't this amazing! Some minor tweaks were needed to handle the 'playing now' listens correctly. At the same time, following the MetaBrainz guidelines to write quality code, I worked on making the SQL queries more readable. Then came the much awaited Delete a Listen feature, and now we can finally get rid of the embarrassing listens!!

I also addressed some high priority tasks like giving users an option to download their submitted feedback as JSON. We noticed some UI glitches, and then came three back-to-back PRs to update the feedback control shades, improve the listen time text, and smooth the deletion animation. This is how the listen list looks:

List of listens

What’s next??

Oh, now comes the time when we talk about the current scenario. The tasks currently on my radar are adding cover art support so that the page looks more alive, and improving the Spotify imports to only import listens played after the latest Spotify listen we have for a user.

After this I aim to work on the recommendation features being actively pursued by the team. Mr_Monkey and I have also been working on some design concepts for the All New ListenBrainz. I am pretty excited to work on it. Wanna take a sneak peek?

A new fam

The journey with MetaBrainz has been so amazing that I am tempted to stick around. I feel ecstatic to be a part of GSoC with the best org 🙂 . The best part is – it's never all about code. There's a lot to gain. Each day marked gaining maturity and thinking more and more like a real developer. I started feeling at ease with the communicate → code → integrate chain. I feel really fortunate to be a part of the MetaBrainz family, where everyone is a ping away ❤ .

GSoC marks the kickstart of my journey with MetaBrainz and I will be here lurking on IRC, shooting PRs to make the projects more and more awesome.

Heartiest Gratitude

GSoC 2020: Adding Statistics and Graphs for ListenBrainz Users and Community

Hey everyone! I am Ishaan Shah (ishaanshah), a sophomore at International Institute of Information Technology – Hyderabad, India. This summer, I worked on ListenBrainz as a participant in Google Summer of Code ’20. My project involved generating statistics and visualisations for users using Apache Spark. This blog is an overview about the work I did and my experience working with ListenBrainz.

I started contributing to ListenBrainz in January 2020. My first PR was for LB-179, a small quality-of-life improvement to the LastFM importer. My first major contribution was porting the LastFM importer to ReactJS. Over the next two months, I continued working on the frontend, mainly improving the frontend infrastructure by adding support for automated testing, porting the codebase to TypeScript, and standardising the frontend code using ESLint and Prettier.

After making a few patches, I understood how ListenBrainz worked and got comfortable with the codebase. I decided to make a proposal for adding statistics to ListenBrainz using Apache Spark. While writing the proposal, I referred to many other websites, blogs, as well as community discussions for different ideas about statistics which could be added. After some research, I narrowed down on the specific graphs and statistics that I wanted to calculate during GSoC.

Community Bonding Period

Since I had been working with the MetaBrainz community since January, I was familiar with how things worked. We decided to use the Community Bonding Period for fixing and updating the Top Artists charts for a user. The first task I took up was adding an API endpoint for fetching a user's Top Artists data programmatically. Until then I had mostly worked on the frontend, so this task helped me get familiar with the backend architecture. Next, I worked on porting the Top Artists graph from d3 to nivo – a charting library built with ReactJS and d3. The Top Artists graph previously supported only All Time statistics, so I worked on adding support for more time ranges. This was the first time I worked with Apache Spark, and the PR for this took quite some time, but it was essential that we got it right, as most of the statistics we built later would use a similar workflow. After we were satisfied with the overall flow of data from our Spark cluster to the web server, I started working on showing the stats for different time ranges on the website. Although this task seemed easy at first, it took much longer than expected: we encountered some bugs and received user feedback when we deployed the graph to production. The rest of this period was spent incorporating that feedback and fixing the bugs.
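For a flavour of this kind of Spark job, a simplified Top Artists aggregation might look like the sketch below; the column names and data path are illustrative assumptions, not the actual ListenBrainz code.

```python
# A toy PySpark aggregation in the spirit of the Top Artists stat.
# The parquet path and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("top_artists_sketch").getOrCreate()
listens = spark.read.parquet("/data/listenbrainz/listens")

top_artists = (
    listens.groupBy("user_name", "artist_name")
    .count()
    .withColumnRenamed("count", "listen_count")
    .orderBy(F.desc("listen_count"))
)
top_artists.show(10)  # top 10 (user, artist) pairs by listen count
```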

Top Artists shown on the Charts page

First Coding Period

We now had a somewhat stable pipeline for calculating the stats and sending them to the server. I started working on the backend for a user's Top Releases stats. We ran into memory issues when calculating these stats on the cluster, so I spent some time finding the cause and realised that we were collecting all the results at once, which caused the driver to run out of memory. I fixed this by collecting the results for each user separately and tweaking some RabbitMQ parameters to make sure that messages aren't dropped while being sent to the server (PR #897). After this, I added Top Recordings for a user. Now we had a brand new Charts page that displayed the user's Top Artists/Releases/Recordings for different time ranges. Next I started working on temporal statistics for a user, i.e. the number of listens over a past time range. The query I wrote for calculating this data turned out to be pretty inefficient for larger datasets, so I ended up writing two versions of the same query: one for large datasets and one for smaller ones. While working on displaying these stats on the frontend, I tried various representations of the data. I finally settled on displaying the data as bar graphs, as shown on this report view.
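The driver-memory fix can be illustrated with a small sketch: instead of collect()ing every result into the driver at once, stream the rows so only a chunk is resident at a time. The DataFrame layout and per-row handling below are assumptions, not the code from PR #897.

```python
# Streaming results out of Spark instead of collect()ing them all at once.
# Columns and the handling of each row are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("top_releases_sketch").getOrCreate()
top_releases = spark.read.parquet("/data/stats/top_releases")  # hypothetical path

# top_releases.collect() would materialise every user's stats in the driver.
for row in top_releases.toLocalIterator():
    # Handle one row at a time (e.g. publish one user's stats to RabbitMQ),
    # keeping driver memory usage roughly constant.
    print(row["user_name"], row["release_name"], row["listen_count"])
```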

Listening Activity shown on the Reports page

Second Coding Period

I added two more graphs in this period: Daily Activity and Artist Origins. The Daily Activity graph shows the number of listens a user has at a particular time of day. I implemented the query for calculating this data in a slightly different way compared to the Listening Activity query, which improved the query speed significantly. I had some trouble finding a good way to represent this data; my mentor helped by suggesting a heatmap, and the results turned out to be pretty good.

Daily Activity shown on the Reports page

Next, we worked on the Artist Origins graph, which provides insight into the geographical diversity of a user's musical taste. I had a lot of help from the ListenBrainz team for this graph and couldn't have done it without them. This was by far the most interesting stat I worked on during the project. Furthermore, it laid a general framework for calculating statistics using data from MusicBrainz. After deploying this map to production, we received feedback from users that the map looked plain for most of them, with little colour difference between regions. This happened because people generally tend to listen to more songs from their home country, so there is a huge difference between the country with the maximum number of artists and the average number of artists from other countries. We fixed this issue by changing the colour scale from linear to logarithmic.

Comparison between linear and log scale in Artist Origins Map
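The effect is easy to see with made-up numbers: under a linear scale, one dominant home country pushes every other country toward zero, while a logarithmic scale keeps them distinguishable.

```python
# Toy comparison of linear vs logarithmic colour scaling (made-up counts).
import math

artist_counts = {"US": 900, "UK": 45, "JP": 12, "SE": 3}
max_count = max(artist_counts.values())

linear = {c: n / max_count for c, n in artist_counts.items()}
log_scaled = {c: math.log1p(n) / math.log1p(max_count) for c, n in artist_counts.items()}

print(linear)      # US: 1.0, UK: 0.05, JP: 0.01, SE: 0.003 -> map looks plain
print(log_scaled)  # US: 1.0, UK: 0.56, JP: 0.38, SE: 0.20 -> regions visible
```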

Final Coding Period

We now turned our attention to calculating some stats for the whole website. We decided to make a graph of the Top Artists over different time ranges. We thought this would be relatively easy, given that we had already done something similar for individual users. However, we hit an unexpected bump: the data we were calculating was not accurate, mainly because listens come from various sources, and minor changes in an artist's name or metadata resulted in a different entry with a different listen count for the same artist. Moreover, we found a couple of users spamming our website for self promotion, and we did not have a solid way to deal with this. Around this time, my college resumed and the amount of time I could dedicate to LB reduced severely. So we decided to use the remaining time to improve the frequency at which stats are updated. I have an open PR (#1052) for this at the time of writing, and we should be able to implement this functionality in the near future.

Artist Origins shown on the Reports page

Experience

The past 4 months have taught me a lot. I learnt new technical concepts every day. I started writing code as a developer rather than a programmer. I understood the importance of proper unit and integration testing (even though it was my least favourite part of adding new functionality). I also found it much easier to talk and interact with people, both online and in real life. Frequent deployments of new features to production helped us a lot: we were able to catch bugs while we still had some context over the code written, and we received feedback from users about how we could improve the new features. It also kept me motivated to keep working on new graphs and statistics, and gave me a sense of satisfaction when I saw them on the production server. I also learnt that things don't always go the way we expect them to. More often than not, you will run into some bumps while adding new features, so it is better to keep some extra time to deal with these issues.

GSoC gave me a wonderful opportunity to work with some amazing people from all over the globe. I was not able to complete all the graphs that I had planned for this summer, but I do plan to continue working on ListenBrainz to add more statistics and new features.

Special Thanks

  • Param Singh (iliekcomputers) for being an amazing mentor and helping me whenever I was stuck on an issue.
  • Robert Kaye (ruaok) for providing some really insightful feedback and the MusicBrainz data that was required for calculating the Artist Origin map.
  • Nicolas Pelletier (Mr_Monkey) for helping me with the frontend for the user Charts page and providing some amazing tips for ReactJS.

State of the Brainz: 2019 MetaBrainz Summit highlights

The 2019 MetaBrainz Summit took place on 27th–29th of September 2019 in Barcelona, Spain at the MetaBrainz HQ. The Summit is a chance for MetaBrainz staff and the community to gather and plan ahead for the next year. This report is a recap of what was discussed and what lies ahead for the community.

Continue reading “State of the Brainz: 2019 MetaBrainz Summit highlights”

GSoC 2019: Recording Similarity Indexing for AcousticBrainz

For Starters… Who Am I?

My name is Aidan Lawford-Wickham, better known as aidanlw17 on IRC, and I’m entering my second year of undergraduate study in Engineering Science at the University of Toronto. This summer, I had the opportunity to participate in my first Google Summer of Code with the MetaBrainz Foundation. Working on the AcousticBrainz project under the mentorship of Alastair Porter (alastairp), I used previous work on measuring track to track similarity as the basis for a similarity pipeline using the entire AB database.

How Did I Get Involved?

When I started applying for GSoC, I needed to find an organization that paired a challenging learning environment with a project of personal interest. Given my own passion for listening to music, playing music, and exploring its overlap with culture, MetaBrainz quickly became my top priority. I jumped on the #metabrainz IRC channel for the first time, and I’ve been active daily ever since!

From there, the whole community welcomed me with open arms and responded thoughtfully to my questions about setting up my local development environment. I made my first pull request for AcousticBrainz, AB-387, which added the ability to include dataset and class descriptions when importing datasets as CSV files. This allowed me to work alongside my soon-to-be mentor for the first time and further acquaint myself with the acousticbrainz-server source code.

I was excited about my first PR and wanted to contribute more. Not only was this a project related to my passions, but it had already begun to teach me about technologies that I hadn't used before. I was struck by the possibility of contributing more and working with great people on a non-profit, open source project. I quickly decided that MetaBrainz was the only place I would apply for GSoC and began to think about proposals. I read through the previous work on recording similarity done by Philip Tovstogan, which was based upon a PostgreSQL solution with shortcomings in terms of speed. With a strong supporting background, high community interest, and my own dreams of what could come from predicting similar tracks, I created a proposal to build a similarity pipeline using Spotify's nearest neighbours library, Annoy. The timeline and tasks shown on the full proposal were adjusted throughout the summer, but the general objectives were maintained. Looking back on the summer now, the basic requirements for the project were as follows:

  • Using the previous work, define metrics for measuring similarity that will translate recording features from the AB database into vectors. Compute and store these vectors for every recording in the database.
  • Create an Annoy index for each of these metrics, adding the metric’s vector for each recording to the index.
  • Develop methods of querying an index, such as outputting nearest neighbours (similar recordings) to a specific recording or many recordings, or finding the similarity between two recordings.
  • Allow users to query the indices via an API.
  • Create an evaluation that allows us to measure the success of our indices in the public eye, fine-tune our parameters, and display index queries via a graphical user interface.

Community Bonding Period

After losing sleep before the announcement, and a huge sigh of relief on May 6th, I was ecstatic to get started.

There was plenty of required reading, and I familiarized myself with the different elements of building similarity into AB. After discussing with Rob (ruaok) and Alastair and cementing our decision to use Annoy as the nearest neighbours algorithm of choice, I took to reading through the Annoy documentation and making a small implementation to grasp the concepts. Annoy is blazing fast and uses small, static files – points that would prove advantageous for querying indices many times, as quickly as possible. Static index files can be shared across processes and could potentially be simple to redistribute to others in the future – a major benefit for further similarity research.

I studied Philip’s previous work, gained an understanding of the metrics he used in his thesis, and reimplemented all of his code to better grasp the concepts and use them as a basis for the summer. Much of Philip’s work was built to be easily expandable, and flexible to different types of metrics. Notably, when integrating it with a full pipeline including Annoy, priorities like speed meant that we lost some of this flexibility. I found this to be an interesting contrast between the code structure for an ongoing research purpose, and the code ready to be deployed in production on a website.

All the while, I kept a frequent dialogue with Alastair to gel as a team, clarify issues with the codebase, and further develop our plans for the pipeline. To build on my development skills, learn more about contributing guidelines and source control, and improve the site, I worked on some exciting PRs during the bonding period. Most notably, I completed AB-406 over a series of 3 PRs, which allowed us to introduce a submission offset column in the low-level table to handle multiple submissions of a single recording. This reduced the need for complexity in queries to the API, decreasing the load on the server. Additionally, I added some documentation related to contributions and created an API endpoint that would allow users to only select specific features rather than an entire low-level document for a recording – aiming at reducing server load.

Last but not least, I got really involved with the weekly meetings at MB! We have meetings every Monday on #metabrainz to give reviews of the last week, and discuss any other important community topics. I love this aspect of the community. Working remotely, it creates a strong team atmosphere and brings us all a bit closer together – even if we’re living time zones apart. During one meeting, we discussed whether or not past GSoC proposals should be available to students. What do you think? This prompted me to share my own experience with the application process at MetaBrainz and look into if/how we could improve it.

… And so it began, we dove into the first coding period.

The Key Components, a Deeper Look

Computing Similarity Metrics

Having explored the previous similarity work from Philip, I used his definitions of metric classes and focused on developing a script to compute metrics for each recording in the database incrementally. Recognizing that we would also need a method of computing metrics for a single recording on submission, I made this script as open ended as possible. After successfully computing all metrics for the first time, we went through an iterative process of altering the logic and methodology to dramatically improve its speed. Ultimately, we used a query to get the batch of low-level recordings that hadn't yet had similarity computed, complete with their low-level data and all high-level models. Though we revised and found bugs in this script time and time again, I'm confident in saying that with perseverance we finally got it working.

Prior to the beginning of the project I had limited experience working with SQL databases, and this objective pushed me to develop new ways to approach problems, and gave me a much deeper understanding of PostgreSQL.

Building Annoy Indices

With all that vectorized recording data from the metrics computation, nothing sounds better than adding it to an ultra-fast index built for querying nearest neighbours! Feeding the data into an index and watching it output similar recordings in milliseconds became the most satisfying feeling. The Annoy library is a platform for nearest neighbours of all sorts, and it is generally simple: define the index, add items with an identifier and a vector, build the index, save it for later use, load it up, and then use its built-in methods to query for similar items. Easy, right? The added challenge is making this interface with recordings from our database as items, and meeting our needs in terms of speed and alterability when new items are added. Annoy is built without checks in many places, and we required a custom cycle of building, loading, and saving indices to ensure they were operable for our purposes (once an index is built, new items may not be added). At this point, the index model is open to saving new indices with different parameters, which allows us to tune as we further develop the pipeline.
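A minimal sketch of that build/save/load/query cycle, using the Annoy API directly; the vector size, metric, and file name are placeholders rather than the AcousticBrainz configuration.

```python
# Build/save/load/query cycle with Annoy. Dimensions and values are toy data.
from annoy import AnnoyIndex

VECTOR_SIZE = 13  # e.g. the length of an MFCC-based metric vector
index = AnnoyIndex(VECTOR_SIZE, "angular")

# Add each recording's metric vector under an integer identifier.
index.add_item(0, [0.1] * VECTOR_SIZE)
index.add_item(1, [0.2] * VECTOR_SIZE)

index.build(10)  # 10 trees; once built, no new items can be added
index.save("mfccs.ann")

loaded = AnnoyIndex(VECTOR_SIZE, "angular")
loaded.load("mfccs.ann")  # static file, mmap-ed, shareable across processes
print(loaded.get_nns_by_item(0, 2))  # the 2 nearest neighbours of item 0
```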

After wrapping the index in a class that interfaced with our needs, we added scripts to build all indices and save them, and scripts to remove indices if need be. Currently, the project has 12 indices, one for each metric in use:

  • MFCCs
  • Weighted MFCCs
  • GFCCs
  • Weighted GFCCs
  • BPM
  • Key
  • Onset Rate
  • Moods
  • Instruments
  • Dortmund
  • Rosamerica
  • Tzanetakis

API Endpoints

Making API endpoints available was a high-priority task and an exciting aspect of the project, since it would allow users to interact with the data provided by the similarity pipeline. Using the index model, I created three API endpoints:

  • Get the n most similar recordings to a recording specified by an (MBID, offset) combination.
  • Get the n most similar recordings to a number of recordings that are specified (bulk endpoint).
  • Get the distance between two recordings.

For each endpoint, a parameter indicates the metric in question, determining which index should be used. Currently, the endpoints also allow varying index parameters, such as the distance type (the method of distance calculation) and the number of trees used in building the index (precision increases with more trees, while speed decreases).

A full explanation of the API endpoints is documented in the source code.
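As a purely hypothetical illustration of how a client might call one of these endpoints (the URL pattern and parameter names below are guesses for the sketch; the source code documentation has the real interface):

```python
# Hypothetical client call to a similarity endpoint; the route and
# parameter names are assumptions for illustration only.
import requests

base_url = "https://acousticbrainz.org"  # route below is hypothetical
mbid = "0dcd1e0e-0000-4f2b-8000-000000000000"  # example recording MBID

params = {
    "metric": "mfccs",           # which index to query
    "n_neighbours": 5,           # how many similar recordings to return
    "distance_type": "angular",  # index parameter: distance calculation method
    "n_trees": 10,               # index parameter: precision/speed trade-off
}
resp = requests.get(f"{base_url}/api/v1/similarity/{mbid}", params=params)
print(resp.json())
```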

Baseline Evaluation

As I said, an index can be altered using multiple parameters that impact the build speed, query speed, and precision in finding nearest neighbours. Assessing the query results from our indices with public opinion is a top priority, since it gives us valuable data for understanding the quality of similarity predictions. With the evaluation we will be able to collect feedback from the community on a set of similar recordings – do they seem accurate, or should a recording have been more or less similar? What recording do you think is the most similar? With this sort of feedback, we can measure the success of different parameters for Annoy, eventually optimizing our results.

Moreover, this form of evaluation provides a graphical user interface to interact with similar recordings, as a user-friendly alternative to the API endpoints. Written using React, it feels snappy and fast, and I feel that it provides a pleasing display of similar recordings. At this point in the project I was glad to accept a frontend challenge which differed from the bulk of my work thus far.

Documentation and Project Links

Similarity pipeline related:

Additional work:

Going Forward

This summer allowed us to build on previous similarity work to the point of developing a fast, full pipeline. There is still a vast amount of work to be continued on the pipeline, and I am eager to see it through. In the upcoming year I plan to continue contributing to AcousticBrainz and the MetaBrainz Foundation as a whole. These are areas that I'm interested in continuing to develop for the recording similarity pipeline:

  • Parameter tuning on Annoy indices
  • Adding more metrics to cover other recording features
  • Adding support for hybrid metrics that consider multiple features (this was started by Philip and should be integrated to provide more holistic similarity)
  • Making indices available for offline use
  • Creating statistics and visualizations of vectors for each metric

Wrapping Up

To say the least, this has been a highly rewarding experience. MetaBrainz is a community full of extraordinary, thoughtful, and friendly developers and enthusiasts. I will be forever thankful for this opportunity and the lessons that I gained this summer. I am so excited to meet everyone at the summit this September! I’d like to personally thank my mentor, Alastair Porter (alastairp), for his perceptive guidance, his support, his friendship, and his own contributions to the project. Thanks to Robert Kaye (ruaok) for his support, thoughts, and enthusiasm towards this project, as well as for his dedication to MetaBrainz. Thanks to Google for making this all possible – SoC is a highly unique opportunity to learn about open source software and make new connections! Cheers.

GSoC 2019: Add Edit Previews to Non‐Release Entities in MusicBrainz

I am Anirudh Jain (Cyna on IRC), an undergraduate student at Bharati Vidyapeeth’s College of Engineering, New Delhi, India. I’ve been working on the MusicBrainz project of the MetaBrainz Foundation as a participant in Google Summer of Code 2019. This year marks my beginning as an open source developer. My work during the GSoC 2019 period can be found in the “temp” branch of my musicbrainz-server clone. The changes there will slowly be merged into the “cyna-gsoc” branch in the main musicbrainz-server repository on GitHub as they’re reviewed.

About the Project

Continue reading “GSOC 2019: Add Edit Previews to Non‐Release Entities in MusicBrainz”

GSoC 2019: Support for Reviewing and Rating More Entities on CritiqueBrainz

Hello everybody! My name is Shamroy Pellew, and I am a rising sophomore at SUNY Buffalo.

This summer, as part of Google Summer of Code, I collaborated with the MetaBrainz Foundation on CritiqueBrainz, the foundation’s archive of user‐written music reviews. I have accomplished much in these past four months, and it has been a great experience working under the guidance of my mentor, Suyash Garg. Even though there is still some work to be done, most of the code I wrote has either been merged or is in code review, and I believe it is safe to say I achieved the goal of my original proposal.

Proposal

I initially planned to use the mbdata package to query the MusicBrainz database for information regarding artists, labels, recordings, and works, so that I could achieve my goal of supporting reviews for these entities on CritiqueBrainz. However, I soon discovered BrainzUtils, a Python package with “common utilities used throughout MetaBrainz projects,” so it was decided that it would be best to use those utilities instead of writing my own. Of course, a few changes had to be made: CritiqueBrainz had features that BrainzUtils was missing, so those had to be moved over and merged. The inclusion of BrainzUtils was the only real divergence between my original proposal and my actual course of action. Otherwise, everything went according to plan.

Phase 1

Adapting CritiqueBrainz code for use in BrainzUtils was a bit of a learning curve and took up a good part of the first phase. I had to gain familiarity with both code bases and the differences between Python 2 and 3. I also had to write some new unit tests to ensure everything was functioning as it should, which I had never done in Python before. The existing BrainzUtils code and feedback from my mentor were a great help, though.

Here are the merged pull requests for this phase:

Phase 2

After I finished moving features to BrainzUtils, but before I could add support for reviewing new entities, I had to convert the existing CritiqueBrainz functionality to use BrainzUtils for data retrieval. This was a simple change, as the same code was being used, just from a different source. Once that was done, I moved on and began working on reviews for the new entities.

Here are the merged pull requests for this phase:

Phase 3

Adding support for reviewing the new entity types required the same simple steps for each type. First, each new type was added to the existing SQL script that declares entity types, and for each new type an ALTER script was made. Then, I retrieved information about each entity through BrainzUtils, including any necessary supplementary data. Search for the new entity types also had to be implemented, using musicbrainzngs, a Python binding for the MusicBrainz web API. So, I wrapped the musicbrainzngs search API call in a function and created new HTML templates, using Jinja, for finding the new entities. Finally, I had to enable reviews for the new entity types. I edited the list of reviewable entity types and the existing review templates to include data about the new types.
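For the search step, wrapping a musicbrainzngs call can be as thin as the sketch below; the wrapper name and return shape are illustrative, though search_labels and its result keys follow the musicbrainzngs documentation.

```python
# Thin wrapper around a musicbrainzngs search call (here: labels).
import musicbrainzngs

musicbrainzngs.set_useragent("critiquebrainz-sketch", "0.1")


def search_labels(query, limit=10, offset=0):
    """Search MusicBrainz for labels matching `query`, returning
    the result list and the total match count."""
    results = musicbrainzngs.search_labels(query=query, limit=limit, offset=offset)
    return results["label-list"], results["label-count"]


labels, count = search_labels("Sub Pop")
print(count, labels[0]["name"] if labels else None)
```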

Naturally, by this point in the project, a few bugs had popped up. There were problems with handling deleted entities, some with data not being displayed, and even cases where data was completely missing. These were solved as they appeared, and were only minor headaches.

Here are the merged pull requests for this phase:

Overall, there was also some human error on my part that slowed things down. I could have communicated more effectively and delivered each task piece by piece, which would have resulted in better feedback from my mentor.

Conclusion

In total, I opened 17 pull requests across BrainzUtils and CritiqueBrainz. If I had more time, though, I would have liked to work on my stretch goal of incorporating entity ratings from MusicBrainz into CritiqueBrainz. Although I did manage to open a BrainzUtils pull request for serializing the MusicBrainz ratings when fetching information, I did not get a chance to do anything with this data.

I’d like to thank the MetaBrainz Foundation for this amazing opportunity. Thanks to the team and thanks to Google, I was able to produce something that people everywhere will be able to use. I learned a lot about open source this summer, and I was able to polish up my Python skills. I’m looking forward to continuing work on CritiqueBrainz and the continued support from the MetaBrainz team!

GSoC 2019: An open-source music recommendation engine

Give me music that I like.

When you start discovering yourself, just know that you are at the right place and with the right people.
MetaBrainz is the one for me!

I am Vansika Pareek (pristine__ on IRC), an undergraduate student at the National Institute of Technology, Hamirpur, India. I have been working on the ListenBrainz-Labs project for MetaBrainz as a participant in Google Summer of Code ’19. The end of GSoC ’19 is a beginning for me. Cheers!

How it all started?

Continue reading “GSoC 2019: An open-source music recommendation engine”

GSoC ’16 + ListenBrainz = fun :)

Hello,

I am Pinkesh Badjatiya and I have been working on ListenBrainz as part of GSoC ’16. I was largely involved in implementing the most requested features in ListenBrainz.
I began my journey with MetaBrainz not long before the Final Organization list was out. I started with MusicBrainz but moved quickly to ListenBrainz, and have been working on it since then.

About the project

The project consisted of creating a proxy scrobbling API similar to last.fm’s, which could be used by existing desktop clients to submit listens to listenbrainz.org. I submitted my initial idea, which involved creating a new API along with a few other optional features that were very much required (import, export, etc.).
The project made its way through the approval process, and I worked with ruaok (my mentor) & alastairp to get important things done. Yey!

Here are some snapshots of my journey with ListenBrainz.

API_compat

ListenBrainz already had its own API, which can be used to fetch/submit listens, but all the existing clients that support scrobbling to last.fm use the ws.audioscrobbler.com API. To add support for these clients, I ended up creating a proxy API, api_compat (as in “compatible API”), that translates every request sent to “api.listenbrainz.org/2.0/” into the native format. This is an additional API which can be used alongside ListenBrainz’s existing native API.

This was largely the main goal of my project proposal. The instructions for scrobbling using Audacious are attached along with the source code.
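The core idea can be sketched in a few lines of Flask: a single /2.0/ endpoint inspects the Last.fm-style method parameter and translates a scrobble into a native listen submission. The handler below is a simplified assumption, not the actual api_compat code (which also handles auth handshakes, response formats, and many more methods).

```python
# Simplified proxy endpoint in the spirit of api_compat.
# submit_listen and the response bodies are illustrative stand-ins.
from flask import Flask, request

app = Flask(__name__)


def submit_listen(listen):
    """Hypothetical call into the native ListenBrainz submission path."""
    print("submitting:", listen)


@app.route("/2.0/", methods=["GET", "POST"])
def api_compat():
    method = request.values.get("method", "").lower()
    if method == "track.scrobble":
        listen = {
            "track_name": request.values.get("track"),
            "artist_name": request.values.get("artist"),
            "listened_at": request.values.get("timestamp"),
        }
        submit_listen(listen)
        return '<lfm status="ok"/>', 200, {"Content-Type": "application/xml"}
    return '<lfm status="failed"/>', 400, {"Content-Type": "application/xml"}
```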

Import lastfm-backup

The import page now allows users to import listens from last.fm scrobbles or from a backup file downloaded from the older version of the last.fm website.
On successful import of listens from a backup, you'll get a notification of success.

Export listens

This allows users to export their listens from the listenbrainz.org website, which is useful for those who want to keep track of their listen history offline as well.
The export feature can be accessed from the drop-down menu.


Playing Now

With the addition of api_compat, support for the currently playing song was needed. This keeps the currently playing song on the website in sync with your favourite player.

Import scraper uses audioscrobbler API

I also worked on updating the import scraper, which now uses the ws.audioscrobbler.com API, allowing users to import listens without opening their last.fm profiles. This also provides other useful track information to ListenBrainz.
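Fetching a user's scrobbles through that API looks roughly like the following; you need your own Last.fm API key, and the response layout follows the documented user.getRecentTracks format.

```python
# Fetch a user's recent scrobbles from the audioscrobbler API.
import requests

params = {
    "method": "user.getrecenttracks",
    "user": "some_user",
    "api_key": "YOUR_LASTFM_API_KEY",
    "format": "json",
    "limit": 200,  # maximum page size
}
data = requests.get("https://ws.audioscrobbler.com/2.0/", params=params).json()

for track in data["recenttracks"]["track"]:
    print(track["artist"]["#text"], "-", track["name"])
```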

Migrate to PostgreSQL

Another important change to ListenBrainz was how it stores listens. We moved from Cassandra to PostgreSQL. Cassandra was fast and effective, but getting information beyond the user's listens (e.g. for generating statistics) was not possible. So we switched to Postgres + Redis, which opened up more possibilities for the future.

Experience

After 3.5 months, I ended up with 15 merged and 3 closed PRs, and a bunch of features for ListenBrainz that improved its look and feel.

My pull requests: https://github.com/metabrainz/listenbrainz-server/pulls?utf8=%E2%9C%93&q=is%3Apr%20author%3Apinkeshbadjatiya%20

I have worked on quite a lot of varied things in the past 4 months. A lot of them were not actually part of the GSoC proposal, but they were done largely in the same timeline or were optional targets, so I suppose they count significantly towards GSoC.
I worked largely with alastairp, ruaok and Gentlecat. Gentlecat helped improve my coding style by providing feedback on my PRs. I worked with alastairp and ruaok on ideas and suggestions for how to address a problem and its possible solutions. It was an interesting experience working with the community and getting to know MetaBrainz. Now that my understanding of the project and the community has increased, I look forward to making some great contributions!

Conclusion

In short, ListenBrainz went through a hell of a lot of changes in the past 4 months. If you were waiting for it to improve before using it, now is the time to try it. I bet you'll love its new look and you won't be disappointed. 😀