MusicBrainz schema change upgrade downtime: 17:00 UTC (10am PDT, 1pm EDT, 19:00 CEST)

Hi!

At 17:00 UTC (10am PDT, 1pm EDT, 19:00 CEST) we will start the process of our schema change release. The exact time we start the change will depend on how long it takes to finish our preparations, but we expect it to be shortly after 17:00 UTC.

Once we start the process, we will put a banner notification on musicbrainz.org and post updates to the @MusicBrainz Twitter account, so follow us there for more details.

After the release is complete, we will post instructions here on how to upgrade your replicated MusicBrainz instances.

FreeDB gateway: End of life notice, March 18, 2019

Many moons ago, people clamoured for a way to use MusicBrainz from their old FreeDB-enabled (and similar) players. The hope was that this would be a short-term solution while more players picked up MusicBrainz support, so we created the FreeDB gateway, which allowed old clients to use an ancient interface to look up CDs with MusicBrainz.

We’ve been maintaining this gateway for over 11 years now, and recently a user asked us questions about their new music player, saying they were having a hard time getting it to work with our FreeDB gateway.

Wait, what? Someone is developing specifically for a stop-gap measure? Clearly the goal of the FreeDB gateway has been misunderstood, and people are not treating it as a gateway to using the proper MusicBrainz API endpoints.

We’re no longer keen on supporting this gateway and have had trouble finding volunteers to maintain it. Our staff has enough to do with our own duties and has no interest in maintaining this further, and neither do I.

For these reasons the FreeDB gateway is going away in 6 months’ time; March 18, 2019 will be the absolute last day that the gateway will be functioning. Should something crash and the gateway run into problems before then, we’ll just kill the VM the gateway is running on and call it a day.

11 years of temporary is enough — if you use this service, migrate to a proper MusicBrainz endpoint right now!
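
To make “a proper MusicBrainz endpoint” concrete: the CD lookups the FreeDB gateway handled can be done against the regular MusicBrainz web service’s disc ID resource. Below is a minimal sketch in Python using the requests library; the disc ID and User-Agent values are placeholders, a real player would compute the disc ID from the CD’s table of contents (for example with libdiscid), and extra data can be requested with the web service’s inc parameters.

    # Minimal sketch of a CD lookup against the regular MusicBrainz web service
    # instead of the FreeDB gateway. The disc ID below is a placeholder; real
    # clients calculate it from the CD's table of contents (e.g. with libdiscid).
    import requests

    DISC_ID = "REPLACE_WITH_A_REAL_DISC_ID"  # placeholder value

    response = requests.get(
        f"https://musicbrainz.org/ws/2/discid/{DISC_ID}",
        params={"fmt": "json"},
        # MusicBrainz asks clients to identify themselves with a meaningful User-Agent.
        headers={"User-Agent": "ExampleCDPlayer/0.1 ( example@example.org )"},
        timeout=10,
    )
    response.raise_for_status()
    for release in response.json().get("releases", []):
        print(release["title"], release.get("date", ""))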

AcousticBrainz migration: We’re on!

We had to postpone the migration of AcousticBrainz last week since we ran out of time (our database is getting to be sizable!). We’ve migrated the bulk of our data and are now ready to move the last bits and call the move complete.

Downtime will start very soon — follow us on Twitter for more detailed updates.

AcousticBrainz downtime: Migrating hosting to our other servers

Today we’re going to migrate the AcousticBrainz service from the standalone server we’ve rented for the past few years to our shared infrastructure at Hetzner. We’ve been prepping for this move for a few weeks now, and the actual process we’ll follow has been used before, so we don’t expect the downtime to be more than 1 hour.

We’re sorry for the downtime that will be coming — to keep up with what we’re doing, please follow our progress on Twitter. We hope to start the migration within an hour or two of this blog post going up.

The end of the replication nightmare!

I’m pleased to report that our nightmare of finding/reconstructing the missing replication packets is finally over!

Through many heroic hours of work, Bitmap and Chirlu have reconstructed the missing replication packets. All clients should now be on their way to being up to date. We’ve learned a number of lessons (some good, some bad — that’s life, right?) in this ordeal and we hope to avoid these issues in the future.

A number of people from our community were an integral part of this recovery process: users mbcz, rembo10 and xeam sent us their complete DB dumps! Bitmap used these to sanity-check and diff several other databases and finally extract the missing packets. Thank you for dropping what you were doing and sending us a few GB of data over blazingly fast connections. Without you this would not have been possible, and that is not an exaggeration. Thank you!

After some more rest we’re going to continue putting out the smaller fires that remain from the move to NewHost, but for now the big fires are out. Just in time for the weekend!

In the 11-year history of the replication stream we’ve had to have users restart their stream about 3-4 times because of problems on our end. Zero would’ve been nicer, but I’m proud that we’ve been able to make this system work for so long. On a daily basis we seem to have about 400 replicated copies of MusicBrainz running all over the world. Clearly this part of our service is well used, and I sleep a little better at night knowing that our most critical data is backed up across the globe.

Just for fun, here is a graph of the replication API usage over the last 6 months:

[Figure: hourly-api-usage.png, hourly replication API usage over the last 6 months]

Towards the end, the graph shows the week-plus-long break, then a small blip as some of our replicas got unstuck yesterday, and then the much larger spike as the rest of the replicas got unstuck. As for what caused the blip in mid-October, I have no idea.

Anyways, please accept my apologies for the replication stream outage and keep replicating!

Thanks!

Replication update: Do you have a DB that could help us?

In my post from yesterday I talked about our continuing struggle to fix our replication stream. Overnight we learned two new things:

  1. The lengthy hard drive recovery process has failed and yielded no useful results. 😦
  2. We have a working DB diff program in place that allows us to create missing replication packets from two DBs at known packet numbers (a rough sketch of the general idea follows this list).
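
To give a rough sense of what #2 means in practice, here is a purely conceptual sketch in Python. It is not our actual diff program, and the table name in the usage comment is just a placeholder; the idea is simply that if one database sits at a known packet and another sits a few packets ahead, the rows that differ between them describe the changes the missing packets would have carried.

    # Conceptual sketch only -- NOT the actual MetaBrainz diff tool.
    # Compare the same table in two databases at known replication sequence
    # numbers; the rows inserted, deleted or updated between them approximate
    # what the missing replication packet(s) would have carried.
    import psycopg2

    def table_rows(dsn, table, key_columns):
        """Return all rows of `table`, keyed by the given primary key columns."""
        with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
            cur.execute(f"SELECT * FROM {table}")
            columns = [d[0] for d in cur.description]
            key_idx = [columns.index(c) for c in key_columns]
            return {tuple(row[i] for i in key_idx): row for row in cur.fetchall()}

    def diff_table(old_dsn, new_dsn, table, key_columns):
        old = table_rows(old_dsn, table, key_columns)
        new = table_rows(new_dsn, table, key_columns)
        inserted = [new[k] for k in new.keys() - old.keys()]
        deleted = [old[k] for k in old.keys() - new.keys()]
        updated = [new[k] for k in new.keys() & old.keys() if new[k] != old[k]]
        return inserted, deleted, updated

    # Hypothetical usage: diff the artist table of a replica stuck at one packet
    # against a replica that is a few packets ahead.
    # ins, dels, upd = diff_table("dbname=db_behind", "dbname=db_ahead", "artist", ["id"])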

#2 is a great step forward in fixing our replication stream, but we’re missing a specific replicated database at a specific point in time. If you have a replicated database, please read on and see if you can help us:

  1. Is your database at replication packet 99847? The way to find out your current replication sequence is to look in slave.log in your server directory or to issue the query
    select current_replication_sequence from replication_control;

    at the SQL prompt.

  2. Do you have a complete replicated MusicBrainz database, including the cover_art_archive schema?
  3. Are you willing to make a dump of the DB and send it to us? (See the sketch after this list for one way to do that.)
  4. Do you have a fast internet connection to make #3 possible?
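
If it helps, here is a rough sketch covering items 1-3. It assumes a local PostgreSQL instance, the default musicbrainz_db database name used by musicbrainz-server, and that pg_dump’s custom format is an acceptable way to produce the dump; adjust the connection details to your own setup.

    # Rough helper sketch; the database name and connection details are
    # assumptions based on a typical musicbrainz-server setup.
    import subprocess
    import psycopg2

    DB_NAME = "musicbrainz_db"  # default in musicbrainz-server; yours may differ

    conn = psycopg2.connect(dbname=DB_NAME)
    cur = conn.cursor()

    # Item 1: is this replica sitting exactly at packet 99847?
    cur.execute("SELECT current_replication_sequence FROM replication_control")
    sequence = cur.fetchone()[0]
    print("replication sequence:", sequence)

    # Item 2: is the cover_art_archive schema present?
    cur.execute(
        "SELECT EXISTS (SELECT 1 FROM information_schema.schemata"
        " WHERE schema_name = 'cover_art_archive')"
    )
    has_caa = cur.fetchone()[0]
    print("cover_art_archive present:", has_caa)

    cur.close()
    conn.close()

    # Item 3: one way to produce a compressed dump to send us.
    if sequence == 99847 and has_caa:
        subprocess.run(
            ["pg_dump", "--format=custom", "--file=musicbrainz-99847.dump", DB_NAME],
            check=True,
        )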

If you’ve answered yes to all of the above, please send email to support at metabrainz dot org.

Thanks!