Redundant MySQL Redundancy

  • What's the best way to ensure 99.999% service of a MySQL server? I'm looking for information on best practices for redundancy, and on the best way to achieve hot failover ... or whether it's better to use a cluster.

    Here are the technical details: I have two database servers purchased, but not yet set up. The main one will run PERC3 hardware RAID 0+1 across four 18 GB drives to form two mirrored 36 GB drives. The other will run Linux software RAID 1 across two 36 GB drives. Most of the activity will be reads; there will be minimal writes. All drives are in hot-swap trays, I have spare drives ready to go, and the colo is less than ten minutes from my home (...since hard drives always fail at 3 am). Hopefully, this ensures that I cannot be taken down by a single drive failure. The main machine has a dual power supply; the backup has only a single one. (I'm working with a very limited number of rack units.) Both machines are dual processor and dual NIC and will be running CentOS. Neither machine will be internet accessible; instead they will be connected to a private subnet accessible only to the web servers. Both machines will be connected to a private UPS in the rack, and the main machine will also be connected to the colo power. This should keep the databases from getting knocked offline like Wikipedia's and LiveJournal's did, even if it means that power will fail to everything but the firewall, one web server, and the databases.

    I'm not sure how to configure MySQL for this duty, though. I'm used to using MyISAM tables for maximum speed, and I've only ever configured MySQL to run on the same server as the web server. I could use hints for how to build maximum 'if shit happens' resilience into the system. I do know that there is a clustering solution, but I haven't read too deeply into it. My questions are:

    1. Am I configuring the hardware for maximum redundancy, speed, and reliability? Ideally, I'd be able to use hardware that had more redundancy built in, but I'm limited in my hardware selection by available rack space.

    2. Is it possible to hot-failover a MySQL database using binary logging or some such, so that when one fails, the slave automatically assumes its identity and starts serving queries? I did google, but all efforts turned up the Cluster ... which I'm not sure about because it seems to rely on one server to be the cluster controller. (Insert joke about cluster-!@#$ here.)

    3. Either way, is that a better solution or is clustering a better solution? Would any additional hardware (i.e. a cluster controller) be required? What happens if the cluster controller goes down?

    4. What table type choice is the best? What will I need to know about configuring these servers to be hardy but fast draft horses rather than the thoroughbred racers that I'm used to creating?

    5. What kind of network traffic will communication between the cluster or slave/backup require? Should I have a separate network for this traffic?

    My general intent is to provide maximum reliability and 'soft tiered failures' ... if something breaks, it goes to the next level and screams for help. If THAT breaks, then it goes to the next level and screams louder. If THAT breaks, it's still not the end of the world, because there's one more layer ... but if that last layer breaks, well, god save the SLA.
Additional information: All servers in the rack are CentOS; most servers have two 10/100 NICs; I haven't purchased the switches yet, so they can be managed if that's best; three web servers will be feeding off of the two database machines; I'm not too concerned about processors overloading or failing, but I am more concerned about hard drives; and I can't add additional or larger hardware. PHP and Perl will be the methods of access. Thanks in advance for your answers. Horror stories are welcome.

  • Answer:

    "hardware raid 0+1" Careful; you want RAID 1+0 for redundancy -- i.e. you want to mirror, then stripe, not the other way around. The reasons are fairly obvious when you think about it -- when a striped drive fails, you lose everything in that set, but when a mirrored drive fails, you only end up with a degraded but functional mirror. http://www.pcguide.com/ref/hdd/perf/raid/levels/multXY-c.html. Also, make sure you have a spare card and a system you can put it in; software RAID's easy enough to access on another system if a SCSI controller dies, a RAID controller is rather less easy to replace. "Most of the activity will be read, there will be minimal write." Then you're in luck; MySQL supports master-slave replication which is good for scaling reads. A bigger issue is whether you need all that uptime on writes too; if you can live with not being able to write while your master server is down, you can keep reading from the slaves without it. Other forms of replication like master-master are possible, but you run into problems with database consistancy (what happens if they lose sync; which one's right?) and things like auto_increment don't behave properly. You can also make a master-slave failover approach where a slave takes over if the master dies, but I'm not aware of any out-of-the-box solutions for this -- once you understand the issues (and you must if you really want 5+ nines) you can teach your applications how to do it themselves. One issue with slaves is they tend to get behind if they're too loaded to properly service writes (either from locks in the case of MyISAM, or just plain old IO/CPU contention). You'll need to teach your readers how to detect this (a simple SHOW SLAVE STATUS check can be enough) and deal with it. Using memcached as a write-through cache can help, since you reduce the need to read from a slave immediately after a write to the master -- memcached is naturally fail-soft and distributed, so is a good choice if you know how to use it. LiveJournal's code is available, and they're heavy users of master-master replication and memcached; take a look for yourself :) "all efforts turned up the Cluster ... which I'm not sure about because it seems to rely on one server to be the cluster controller" As I understand it, the cluster controller doesn't need to be running all the time, it's just a good idea. I'd be more worried about how mature it is; I haven't done much research on it tbh, since its requirements exceed our available hardware for the size of our database. "4. What table type choice is the best?" It depends. InnoDB is supposed to be better at recovering from failure (although this depends on how well behaved your hardware is when it comes to things like fsync()), and it's a better choice if you've got long-running readers which will block updates/inserts and drag your slaves behind. "5. What kind of network traffic will communication between the cluster or slave/backup require?" If you're not doing many writes, then not much. Your readers probably won't come close to maxing it out either; the main issue is when it comes to adding a new slave -- copying a large database across Fast Ethernet can be painful. Again, this is a question you should answer yourself by doing tests with your particular workload; ours is write-heavy (on the order of a few hundred inserts/updates per second) and network's the least of our problems. "I haven't purchased the switches yet, so they can be managed if it will be best" Uhm.. duh? 
At the very least you want to be able to see and resolve things like duplex mismatches. Maybe you can do without it, but if you have SLA's, well... why are you even needing to ask? Aren't you supposed to get experience first.. then start making promises, not the other way around? :o Get a terminal server too -- I've lost count of the number of times I've repaired a server using serial console and saved a 3 hour drive to the data centre. When your RAID controller/driver buggers up and spews junk to the console, you don't want to have to drive in and plug in a monitor to see it. Ditto for power bars; unless you're running Suns or other servers with LOM (Lights Out Management; basically lets you turn a system on/off remotely) you're going to want a system you can SSH in to and powercycle systems when they lock up. I could go into more detail, but the opportunity cost is cooking some much needed food, and you're not paying me enough to overcome that right now ;)
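
    To make that SHOW SLAVE STATUS check concrete, here is a minimal sketch in PHP (one of the asker's stated access methods) using mysqli. It is only a sketch: the hostnames, credentials, database name, and 30-second lag threshold are all hypothetical, and the reader account needs the REPLICATION CLIENT privilege for SHOW SLAVE STATUS to work.

        <?php
        // A hypothetical read-server picker: prefer a healthy, reasonably
        // fresh slave; fall back to the master if every slave is stopped,
        // unreachable, or lagging. All names and thresholds are made up.

        function pick_read_server(array $slaves, $master, $max_lag = 30)
        {
            foreach ($slaves as $host) {
                $db = @new mysqli($host, 'reader', 'secret', 'mydb');
                if ($db->connect_errno) {
                    continue; // slave unreachable; try the next one
                }

                $result = $db->query('SHOW SLAVE STATUS');
                $status = $result ? $result->fetch_assoc() : null;
                $db->close();

                // Seconds_Behind_Master is NULL when the SQL thread is
                // stopped, so anything non-numeric means a dead slave.
                if ($status
                    && $status['Slave_IO_Running'] === 'Yes'
                    && $status['Slave_SQL_Running'] === 'Yes'
                    && $status['Seconds_Behind_Master'] !== null
                    && (int) $status['Seconds_Behind_Master'] <= $max_lag) {
                    return $host;
                }
            }

            // Every slave is down or behind: read from the master and scream.
            error_log('All slaves unreachable or lagging; reading from master');
            return $master;
        }

        $read_host = pick_read_server(['db2.internal'], 'db1.internal');

    Calling something like this before a batch of reads gives you the 'soft tiered failure' the asker wants: skip a lagging slave, fall back to the master, and log loudly when it happens.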

SpecialK at Ask.MetaFilter.com

Other answers

Oh yeah, I asked this before (http://ask.metafilter.com/mefi/14041), so this is kind of a follow-up, but I've since learned more and actually gotten the hardware, so I deemed it worthy of a slight repost.

SpecialK

Some random advice:

1) Make sure to plug one of your dual power supply's cords into a plug on one circuit, and plug the other into another circuit. That way, one circuit breaker could go and you'd still be OK. Double-check this. Get wiring diagrams, etc, etc.

2) The term you're looking for is "database replication" -- do your Google searches on that.

3) Database replication is very, very hard. I'm not sure if or how MySQL supports database replication. There are a couple of decent advanced MySQL books out there.

4) I'm not entirely sure 5 or 6 nines are possible with MySQL, because of its internal features. It lacks a transaction log, which would enable relatively quick recovery times. Some databases are also set up to survive an unplugging better. This is why people use things like Oracle, SQL Server, and Sybase: there are features inside those databases that increase uptime.

5) Usually, the physical design of the *tables* doesn't affect uptime; it only really affects performance. The only real influence your logical and physical design has on your uptime is the size of the overall database. To shrink your size, go third normal form.

maschnitz

"Database replication is very, very hard. I'm not sure if or how MySQL supports database replication. There are a couple of decent advanced MySQL books out there." It's actually pretty easy to set up MySQL replication, although it helps to have it up and running before the db becomes anywhere near "mission critical". The replication chapter of the manual (http://dev.mysql.com/doc/mysql/en/replication.html) is pretty helpful; a rough sketch of the setup steps follows below.
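
A rough sketch of those steps, in PHP with mysqli to match the thread's stated tooling. It assumes my.cnf already sets a unique server-id on both machines and log-bin on the master, that the slave was seeded from a snapshot of the master's data, and that a 'repl' account with the REPLICATION SLAVE privilege exists on the master. Every hostname, credential, and log coordinate here is hypothetical; take the real file and position from SHOW MASTER STATUS when the snapshot is cut.

    <?php
    // Hypothetical one-off script, run against a freshly seeded slave to
    // point it at the master and start replication.

    $slave = new mysqli('db2.internal', 'root', 'secret');

    $slave->query('STOP SLAVE');
    $slave->query("
        CHANGE MASTER TO
            MASTER_HOST     = 'db1.internal',
            MASTER_USER     = 'repl',
            MASTER_PASSWORD = 'repl-secret',
            MASTER_LOG_FILE = 'mysql-bin.000001',
            MASTER_LOG_POS  = 4
    ");
    $slave->query('START SLAVE');

    // Confirm both replication threads came up.
    if ($result = $slave->query('SHOW SLAVE STATUS')) {
        $status = $result->fetch_assoc();
        printf("IO thread: %s, SQL thread: %s\n",
            $status['Slave_IO_Running'], $status['Slave_SQL_Running']);
    }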

John Shaft

Freaky: Re: RAID level: Durr. See? This is why I ask. Thank you for all of your information and expertise. As far as making promises -- I'm still figuring out if I can make the promises. The project spec phase has been completed, but the contract has not been agreed on yet. The SLA is one of those things that is still up in the air. To be quite honest, I'm a hack... -especially- with networking. But I can learn if kicked in the right direction. Thanks.

SpecialK

I should clarify: doing database replication with transactions is very, very hard, especially in a multi-phase commit environment. If you don't care about transactional semantics, it's actually pretty easy.

maschnitz

maschnitz - Actually, I don't particularly care about transactions. I just care about reliability and speed. Later on, I might care about transactions, and if I do I'll build a system that works a different way.

SpecialK

Related Q & A:

Just Added Q & A:

Find solution

For every problem there is a solution! Proved by Solucija.

  • Got an issue and looking for advice?

  • Ask Solucija to search every corner of the Web for help.

  • Get workable solutions and helpful tips in a moment.

Just ask Solucija about an issue you face and immediately get a list of ready solutions, answers and tips from other Internet users. We always provide the most suitable and complete answer to your question at the top, along with a few good alternatives below.