Server update, 2016-08-29

This is another small release containing URL relationship fixes, so I’ll let the changelog speak for itself. 🙂 Thanks again to yvanz and chirlu for their contributions. The git tag is v-2016-08-29.

Bug

  • [MBS-8793] – Query string is not removed from Twitter links (see the sketch after this list)
  • [MBS-9048] – Untouched obsolete URL relationships trigger errors in the editing process
  • [MBS-9057] – ISE for artist with PayPalMe URL
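
For the curious, MBS-8793 is about our URL cleanup: when an editor adds a Twitter link, the server normalizes it, and tracking junk in the query string was slipping through. The server itself is written in Perl, so the Python sketch below only illustrates the idea (the function name and the blanket dropping of the query string are for illustration, not the actual cleanup code):

    from urllib.parse import urlsplit, urlunsplit

    def clean_twitter_url(url):
        # Illustration only: drop the query string and fragment from
        # Twitter links, keeping scheme, host and path intact.
        parts = urlsplit(url)
        if parts.hostname in ("twitter.com", "www.twitter.com"):
            return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
        return url

    print(clean_twitter_url("https://twitter.com/MusicBrainz?lang=en"))
    # -> https://twitter.com/MusicBrainz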

Server update, 2016-07-04

This is a small bug-fix release. Server development has slowed down while we work on writing Docker containers for our new hosting infrastructure, but code contributions are always welcome. Thanks to Ulrich Klauer (chirlu) for his work on MBS-8806 in this release.

The git tag is v-2016-07-04.

Bug

  • [MBS-8806] – Artist credits disappearing from tracklist & recordings after editing
  • [MBS-8987] – Untouched URLs are automatically/suddenly cleaned up
  • [MBS-8996] – Error message when trying to merge releases displays as “ARRAY(stuff)”
  • [MBS-8997] – Merging releases with “(Disc 1)”, “(Disc 2)” etc in titles selects all as disc 1
  • [MBS-8999] – Track lengths are in the wrong table column in the release editor

New Feature

  • [MBS-8994] – Allow passing &dismax=true through to the search server so Picard can show the same results as on the website (see the example below)
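
If you want to try the new parameter yourself, it just rides along on a normal web service search request. A rough Python sketch (the query is an arbitrary example, and the "count" field assumes the usual JSON layout of search responses):

    import requests

    # Search for releases with dismax enabled; the query is arbitrary.
    resp = requests.get(
        "https://musicbrainz.org/ws/2/release",
        params={"query": "the wall", "dismax": "true", "fmt": "json"},
        headers={"User-Agent": "example-app/0.1 (you@example.com)"},
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json()["count"], "releases matched")

With dismax=true the search server uses its dismax query parser instead of the full Lucene syntax, which is what lets Picard match what the website's own search box shows.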

Massive connectivity issues

As you are probably aware, we’ve been having lots of network connectivity issues with all services hosted at Digital West in California (all of our projects, except ListenBrainz and AcousticBrainz).

Today we spent all morning trying to replace what we thought to be a faulty switch. That process didn’t go very well at all – we hit every conceivable issue that we could’ve hit. And a few more.

But in the process, we connected our gateway machines directly to our uplink (bypassing our switch) and the network issues persisted! After testing this setup with both of our gateway machines, we’ve now conclusively eliminated our own equipment as the source of the trouble.

At this point our troubles lie in the hands of Digital West to fix. Thankfully the day staff will return to work in a few hours and hopefully we will make some progress on this issue then.

Sorry for all of this hassle. 😦

Updated search jar/war files

Given what utter slackers we are, we haven’t yet finished updating the search server to output the new MBIDs that were added to some entities in our last release. We’ll try to get that done soonish.

However, we did update the search code to fix this error in the search indexer:

ERROR: type “earth” does not exist
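
For context: on a stock PostgreSQL install, the earth type is provided by the earthdistance extension (which in turn depends on cube), so this error generally means a query is referencing the type in a database where those extensions aren’t available. If you want to check your own database, here’s a quick Python sketch using psycopg2 (the connection details are placeholders for your setup):

    import psycopg2

    # Placeholders: point this at the database the indexer reads from.
    conn = psycopg2.connect(dbname="musicbrainz_db", user="musicbrainz")
    with conn.cursor() as cur:
        # The earth type only exists once earthdistance is installed.
        cur.execute("SELECT 1 FROM pg_type WHERE typname = %s", ("earth",))
        print("earth type present" if cur.fetchone() else "earth type missing")
    conn.close()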

I’ve put both of these jar/war files on our FTP site.

If you would like to build these from source, you’ll need commit 4f677727 from mmd-schema and the latest master commit from search-server. To build them, please follow these instructions.

UPDATE: The build from the current master of search-server appears to be unable to load indexes on startup. Please use the old war (we still use it in production) until we can release a fix.

Schema change update

We’ve finally completed the schema update and things are returning to normal. We need to get a new data dump out, and then we will provide upgrade instructions tomorrow. As you might be able to guess, if you run a replicated slave we are going to recommend a clean data import rather than a migration, unless you are already on Postgres 9.5.

And if anyone even dares ask (within the next week) when an updated VM will be released, you will owe each member of the development team 2 bars of high-quality chocolate.

Feeling lucky, punk?

P.S. Can you tell we’ve been up too long? 🙂

Server capacity update

Zas and I have been working hard to improve the capacity and stability of the site. In the last week we’ve identified and fixed at least 3 problems with the search servers, and we’ve added a timeout that cuts off queries running longer than 3 seconds. We believe the main cause of trouble was slow queries causing a pile-up: requests would queue behind a query that ran too long, and the servers never recovered from the backlog and eventually crashed.
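
For those wondering what the timeout looks like in practice: the search servers are Java, but the pattern is generic enough to sketch in a few lines of Python (everything below is a toy stand-in, not our actual code):

    import time
    from concurrent.futures import ThreadPoolExecutor, TimeoutError

    QUERY_TIMEOUT_SECONDS = 3.0  # the budget described above

    def slow_search(query):
        # Stand-in for a real search call that has gotten stuck.
        time.sleep(10)
        return ["result for " + query]

    executor = ThreadPoolExecutor(max_workers=8)

    def search_with_timeout(query):
        future = executor.submit(slow_search, query)
        try:
            return future.result(timeout=QUERY_TIMEOUT_SECONDS)
        except TimeoutError:
            # Fail this request instead of letting callers queue up
            # behind it. (The worker thread keeps running, so a real fix
            # also needs the underlying query to be cancellable.)
            return None

    print(search_with_timeout("stuck query"))  # -> None after ~3 seconds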

We won’t go so far as to say that the search servers are fixed — every time we have a smidgen of hope that things are improving, they crash again, seemingly out of spite! So, let’s just say the search servers are better. 😉

Zas has also made a number of changes to the gateways and how we rate limit our incoming traffic. The rate limiting is now being done in a smarter way that reduces the overall traffic on our web servers. Well done!
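
We won’t reproduce the gateway configuration here, but the general shape of this kind of rate limiting is a token bucket: each client gets a budget of requests that refills at a steady rate, and anything over budget is rejected at the gateway before it ever touches the web servers. A toy Python sketch (the rates are invented, not our actual limits):

    import time

    class TokenBucket:
        def __init__(self, rate_per_sec, burst):
            self.rate = rate_per_sec      # tokens added per second
            self.capacity = burst         # maximum saved-up tokens
            self.tokens = burst
            self.updated = time.monotonic()

        def allow(self):
            # Refill based on elapsed time, then spend a token if we can.
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.updated) * self.rate)
            self.updated = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # over budget: answer 429/503 at the gateway

    bucket = TokenBucket(rate_per_sec=5, burst=10)
    print(sum(bucket.allow() for _ in range(20)))  # ~10 allowed right away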

We’ve also increased our bandwidth budget by 4 megabits per second, which makes the site feel considerably more responsive.

Let me put these improvements into numbers: about a week ago we were struggling to keep up with 250 requests per second and the site felt very sluggish. Now we can handle 500 requests per second and the site feels considerably faster. For large chunks of the day we are managing to handle all the traffic we should handle. And the search servers haven’t crashed in 4 days!

We hope that this will give us a solid base from which to release the schema upgrade tomorrow. Once that is complete, we will start work on moving to the new hosting company.

Thanks for being patient with us!