New VMware Player image of MusicBrainz server available

After users reported a number of problems (insufficient disk space, data import issues), Rod Begbie created a new MB Server VMware image of the 2007-04-01 release for us. To download and play with the new image, read our VirtualMusicBrainzServer wiki page.

Thanks Rod!

Search server down

The search server is terminally overloaded (load average over 100, last I looked), and it was taking down the main web site with it. So, for now, I've had to change mb_server to show a simple error message instead of running the search.

You may be better off using the old search page
for now.

Hopefully more news later.

Scheduled downtime: Tuesday, May 15th, 2000 UTC

Tomorrow at 2000 UTC (1300 PDT, 1600 EDT, 2100 BST, 2200 MET) we're going to finally rotate the new Sun server into active service. We're expecting MusicBrainz to be unavailable for about 90 minutes while we dump the database from our old server and import it to the new server.

Sorry for the inconvenience this will cause.

Solaris help!

We’ve got the new database server finally ready to roll, except for one thing: We don’t know how to monitor the hardware RAID array.

Under Linux we would use mpt-status, but this doesn’t work for Solaris. Does anyone know how to get Solaris to tell us about the state of the hardware RAID array? If one of the drives in the array goes away, we want to know about it as soon as possible.

Any tips would be greatly appreciated!

UPDATE: Our very own inhouseuk had the answer: raidctl — a utility that was installed all along!
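Once you can query the array's state, the check is easy to script for cron. A minimal sketch of such a watchdog follows; note that `raidctl` output varies between Solaris releases, so the sample output and the table layout below are hypothetical illustrations, and in a real deployment you would feed the function the output of `raidctl -l <controller>` via `subprocess`:

```python
# Sketch of a RAID watchdog for Solaris, assuming `raidctl`-style output.
# The state keywords and the sample output below are assumptions for
# illustration; adjust them to what your Solaris release actually prints.

def raid_is_healthy(raidctl_output: str) -> bool:
    """Return False if any line reports a non-optimal volume or disk."""
    bad_states = ("DEGRADED", "FAILED", "SYNC", "MISSING")
    return not any(
        state in line.upper()
        for line in raidctl_output.splitlines()
        for state in bad_states
    )

# Hypothetical `raidctl -l`-style output for a mirrored volume with one
# failed member disk:
sample = """\
Volume  Size    Stripe  Status    Cache  RAID
c1t0d0  136.6G  N/A     DEGRADED  OFF    RAID1
        0.0.0   136.6G            GOOD
        0.1.0   136.6G            FAILED
"""

if not raid_is_healthy(sample):
    print("RAID needs attention")  # hook this up to cron + mail
```

In practice the script would run from cron every few minutes and mail the admins on the first unhealthy result, which is exactly the "know about it as soon as possible" requirement above.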

New VMWare Player image available

Rod put together the VMware Player image of the MusicBrainz server for the April 1 release. If you're interested in hacking on MusicBrainz server code, you should start here:

  1. Download the April 1 server image
  2. Read the VirtualMusicBrainzServer install instructions

Thanks for putting together the VMWare image, Rod!

General site update

It's been a rocky week in the MusicBrainz universe, that's for sure!

About three weeks ago the load on our database server started rising — most likely due to the fact that after the April 1 release people went nutz adding labels to the database and vastly more AR links than before. The onslaught of this extra data pushed our database over an invisible threshold and things started getting shaky.

For a database to run smoothly and efficiently, it should mostly fit into RAM, so that the server doesn't have to continually fetch data from disk. Once the threshold is hit where data does need to be continually fetched from disk, everything slows down drastically.
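The slowdown is easy to see with a back-of-envelope model: the average cost of a page fetch is a blend of RAM and disk latency, weighted by the cache hit rate. The latency figures below are illustrative assumptions (rough orders of magnitude for RAM versus a spinning disk), not measurements of our servers:

```python
# Toy model of why a database slows drastically once its working set no
# longer fits in RAM. Latencies are assumed, illustrative values.

RAM_LATENCY_MS = 0.001   # ~1 microsecond per page served from memory
DISK_LATENCY_MS = 10.0   # ~10 ms per page from a spinning disk

def avg_page_latency_ms(hit_rate: float) -> float:
    """Average cost of one page fetch, given the fraction served from RAM."""
    return hit_rate * RAM_LATENCY_MS + (1.0 - hit_rate) * DISK_LATENCY_MS

for rate in (1.0, 0.99, 0.95):
    print(f"hit rate {rate:.0%}: {avg_page_latency_ms(rate):.4f} ms/page")
```

Dropping from a 100% to a 99% cache hit rate already makes the average fetch roughly a hundred times slower, which is why the threshold feels invisible right up until it is crossed.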

That’s basically what happened three weeks ago — the database outgrew the server we have for it. At first we thought that some feature from the April 1 release was bogging down the server, so we did some triaging with no luck. Finally we decided to throw out nearly 1 GB of useless Add TRM edit data, which shrunk the database size back down to a manageable level.

This, of course, is nothing more than a band-aid. In a few weeks this problem will be back. Anticipating this moment for over a year now, I've been pushing for a large server donation, and the Sun server donation was supposed to take care of exactly that. But we had serious issues getting the database to run well on the Sun box. With the help of some Sun engineers we've gotten past this problem and are now in the final stages of preparing the Sun server for production use.

But in the middle of all this, more disaster struck. Lingling, which was taking over for Stimpy as our primary web server, had a power supply fail early in the morning this past Sunday. Stimpy, with redundant power supplies, was sitting idle waiting to be put back into service after he got a new motherboard from Dell. With all the other problems we didn't have the time to switch Stimpy back in for Lingling — we had scheduled that to happen about 12 hours after Lingling failed.

Lingling has a new power supply arriving tomorrow. Moose, the Sun server, may go into service this weekend or early next week. Once we get these two tasks done, the site performance should be back to being zippy.

Until then, I apologize for the inconvenience!

Classic Tagger degraded

All the usual tricks to massage our database server aren't helping. 🙁

The problem appears to be the ever-growing table of TRMs that the Classic Tagger uses. The table has gotten too big with dead TRMs, and it's really hard to remove all the dead TRMs. So, in order to get things back to a sane state, we've disabled TRM functionality and our database is now working fairly well again.
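Part of what makes a cleanup like this hard is that deleting millions of dead rows in a single statement holds locks and hammers the server, so it usually has to be done in small batches. The sketch below illustrates the batching pattern with Python's `sqlite3` standing in for the real database, and a simplified, hypothetical `trm`/`trmjoin` schema — it is not the actual MusicBrainz schema or cleanup script:

```python
import sqlite3

# Batched deletion of unreferenced ("dead") rows. The schema here is a
# hypothetical simplification: `trm` holds acoustic fingerprints and
# `trmjoin` links them to tracks; a TRM with no join row is dead.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE trm (id INTEGER PRIMARY KEY);
    CREATE TABLE trmjoin (trm INTEGER REFERENCES trm(id));
""")
conn.executemany("INSERT INTO trm (id) VALUES (?)",
                 [(i,) for i in range(1000)])
# Only every tenth TRM is still referenced by a track:
conn.executemany("INSERT INTO trmjoin (trm) VALUES (?)",
                 [(i,) for i in range(0, 1000, 10)])

BATCH = 100  # small batches keep locks short and the server responsive
deleted = 0
while True:
    cur = conn.execute("""
        DELETE FROM trm WHERE id IN (
            SELECT t.id FROM trm t
            LEFT JOIN trmjoin j ON j.trm = t.id
            WHERE j.trm IS NULL
            LIMIT ?
        )""", (BATCH,))
    conn.commit()
    if cur.rowcount == 0:
        break
    deleted += cur.rowcount

print(f"removed {deleted} dead TRMs")  # → removed 900 dead TRMs
```

Committing after each batch is the key design choice: it releases locks and lets regular traffic interleave with the cleanup, at the cost of the whole job taking longer.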

Classic Tagger users: While we figure out how to proceed, the tagger will stop working. The Classic Tagger will not recognize any tracks right now — we apologize for the inconvenience. In the meantime, please check out Picard and PicardQT as alternatives!

Stay tuned for more details!

Scheduled downtime: Friday 2000 UTC, 4pm EDT, 1pm PDT

We need to dump the database and re-import it in order to get off this crazy load spike the database has been on. We will be down for about 90 minutes starting today (Friday) at 2000 UTC, 4pm EDT, 1pm PDT.

Sorry for the inconvenience and late notice!

UPDATE: The import/export is taking longer than we care for. We hope to be done soon. Sorry for the hassle.

Overloaded database server

The load on our database server has been growing over the last few days, causing slowdowns. Since our overall traffic is not going up right now, this suggests that our last update caused some performance issues.

In order to analyze our traffic, I will be doing some performance tuning and query logging to attempt to get to the bottom of this problem. To that end, MusicBrainz will have a couple of short downtimes today as I tinker with our database server.
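For the curious, query logging on a PostgreSQL server (which is what MusicBrainz runs on) is typically enabled with a couple of `postgresql.conf` settings; the threshold and prefix values below are illustrative assumptions, not our actual configuration:

```
# postgresql.conf -- illustrative slow-query logging settings
log_min_duration_statement = 200   # log any statement taking over 200 ms
log_line_prefix = '%t [%p] '       # timestamp and backend pid per line
```

Changing these requires a configuration reload, and sorting the resulting log by duration usually points straight at the handful of queries responsible for a load spike.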

Sorry for the inconvenience.