[ale] Building a Linux/Mysql Database server.

Damon Chesser dchesser at acsi2000.com
Tue May 24 12:33:12 EDT 2011



From: ale-bounces at ale.org [mailto:ale-bounces at ale.org] On Behalf Of Jim Kinney
Sent: Tuesday, May 24, 2011 12:13 PM
To: Atlanta Linux Enthusiasts
Subject: Re: [ale] Building a Linux/Mysql Database server.


On Tue, May 24, 2011 at 10:42 AM, The Don Lachlan <ale-at-ale.org at unpopularminds.org> wrote:
On Tue, May 24, 2011 at 07:37:16AM -0400, LinuxGnome wrote:
> I would go with many 15k drives for the data (for example, if you have a 500G DB, then 9x 73G drives in RAID5 will give you 6-8 times the performance of one 600G drive that is IO bound).  You might consider a ramdrive (i.e. FlashCache, not SSD) for writing your logs to.
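
(For scale, a back-of-the-envelope Python sketch of what nine 15k spindles buy you under RAID5; the per-drive IOPS figure and the classic four-I/O small-write penalty are assumptions for illustration, not measurements:)

# Rough arithmetic for "9x 73G 15k drives in RAID5" vs. one big drive.
# The per-drive IOPS figure is an assumption, not a measurement.
DRIVE_IOPS_15K = 175      # assumed random IOPS for one 15k spindle
N = 9
DRIVE_GB = 73

usable_gb = (N - 1) * DRIVE_GB        # RAID5 loses one drive's worth to parity
read_iops = N * DRIVE_IOPS_15K        # random reads spread over all 9 spindles
write_iops = N * DRIVE_IOPS_15K / 4   # RAID5 small writes cost ~4 I/Os each

print(f"usable space: {usable_gb} GB")
print(f"random reads: ~{read_iops} IOPS ({read_iops // DRIVE_IOPS_15K}x one drive)")
print(f"random writes: ~{write_iops:.0f} IOPS ({write_iops / DRIVE_IOPS_15K:.1f}x one drive)")
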
Do NOT use RAID5. Ever. For ANYTHING. Unless you're trying to show exactly
how much it sucks in comparison to RAID10.

http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt
http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162

If you are absolutely wedded to parity, use RAID6. But really, you should
still use RAID10.

http://storagemojo.com/2010/02/27/does-raid-6-stops-working-in-2019/

The math is already out there. In RAID5, you can only survive one disk
failure - two concurrent disk failures and you're dead. RAID10 may fail
after two concurrent disk failures, but it can survive up to n/2 disk
failures, depending on which disks fail, and the odds are higher that drives
in different mirrors will fail than drives in the same mirror.
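
The "which disks fail" part is easy to put numbers on. A small Python sketch (pure combinatorics, nothing vendor-specific): once one disk is dead, any second concurrent failure kills RAID5, while on RAID10 it is only fatal if it happens to hit the dead disk's mirror partner, i.e. 1 chance in n-1:

# Given one disk already dead, odds that a second concurrent failure
# kills the array.  n is the total number of disks in the array.
def second_failure_fatal(n, level):
    if level == "raid5":
        return 1.0              # any second loss destroys the array
    if level == "raid10":
        return 1 / (n - 1)      # fatal only if it hits the dead disk's partner
    raise ValueError(level)

for n in (6, 8, 12):
    print(f"{n} disks:  RAID5 {second_failure_fatal(n, 'raid5'):.0%}"
          f"   RAID10 {second_failure_fatal(n, 'raid10'):.0%}")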

Then, you want to rebuild. In RAID5, you have to read from n-1 disks to
rebuild, whereas RAID10 requires you to read only 1 disk. As disks increase
in size, the probability of a read error during a rebuild approaches 100%,
and that error is more likely under RAID5 than RAID10 because you're reading
more disks and more data. Read error == failed rebuild. Some current drives
are large enough that it is a certainty in a RAID5 rebuild.
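
To put a number on "approaches 100%", here is a Python sketch assuming the commonly quoted spec of one unrecoverable read error per 1e14 bits and, say, 2 TB drives in an 8-disk array (both figures are assumptions, not from the thread; plug in your own):

# Chance of at least one unrecoverable read error (URE) during a rebuild.
URE_PER_BIT = 1e-14              # assumed: 1 URE per 1e14 bits read
TB = 10**12                      # bytes

def p_read_error(bytes_read):
    bits = bytes_read * 8
    return 1 - (1 - URE_PER_BIT) ** bits

drive_bytes = 2 * TB             # assumed drive size
n = 8                            # assumed array size

raid5_read = (n - 1) * drive_bytes   # RAID5 rebuild reads every surviving disk
raid10_read = drive_bytes            # RAID10 rebuild re-reads one mirror partner

print(f"RAID5  rebuild hits a URE with probability ~{p_read_error(raid5_read):.0%}")
print(f"RAID10 rebuild hits a URE with probability ~{p_read_error(raid10_read):.0%}")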

Do. NOT. Use. RAID5.

+100

RAID5 was designed for when hard drives were REALLY FREAKING EXPENSIVE. RAID5 needs to die. RAID6 needs to join it.

If you're really data-paranoid, go triple-mirror RAID1 with a RAID0 stripe for speed. It makes the rebuilds fly from a user perspective.
6-drive RAID10 SAS will saturate a single PCIe x8 bus on a read :-) So build 2 sets (12 drives) and use another card :-D
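
Ballpark bandwidth math behind that, assuming ~200 MB/s of sequential read per 15k SAS spindle and roughly 1.6 GB/s of usable bandwidth on a first-generation PCIe x8 slot (2 GB/s raw minus protocol overhead); all of these figures are assumptions, not measurements:

# Rough sequential-read aggregate vs. one PCIe x8 (gen 1) slot.
MB_S_PER_DRIVE = 200        # assumed sequential read per 15k SAS drive
X8_USABLE_MB_S = 1600       # assumed usable bandwidth of a PCIe 1.x x8 slot

for drives in (6, 12):
    aggregate = drives * MB_S_PER_DRIVE
    ratio = aggregate / X8_USABLE_MB_S
    print(f"{drives:2d} drives: ~{aggregate} MB/s  ({ratio:.2f}x one x8 slot)")

On these (assumed) numbers, six drives sit near one slot's ceiling and twelve clearly blow past it, which would be the point of adding the second card.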

This does not make sense to me. Six drives will saturate the pipe on a PCIe x8 bus, so you build two of them? How does that mitigate the issue? You now have two 6-drive RAID1-with-RAID0 sets, so it seems you now have the capacity to overrun two PCIe x8 buses' worth of I/O throughput.

Do you mean to make a RAID1 out of two drives on CTRL0 and RAID0 it with a RAID1 on CTRL1?
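
For what it's worth, here are the two layouts that question is distinguishing, sketched in Python with made-up device and controller names (sda..sdl, ctrl0/ctrl1 are placeholders, not anyone's actual config); note that either way each card only carries six drives' worth of traffic:

# Two ways to spread 12 drives across two HBAs.  Names are placeholders.
ctrl0 = ["sda", "sdb", "sdc", "sdd", "sde", "sdf"]
ctrl1 = ["sdg", "sdh", "sdi", "sdj", "sdk", "sdl"]

# Reading 1: two independent 6-drive RAID10 arrays, one per controller.
two_arrays = {
    "md0 = RAID10 on ctrl0": ctrl0,
    "md1 = RAID10 on ctrl1": ctrl1,
}

# Reading 2: one RAID0 over RAID1 pairs whose legs straddle the controllers,
# so losing an entire card still leaves every mirror with one live half.
striped_mirrors = {
    "md0 = RAID0 over 6 mirrors": [f"RAID1({a},{b})" for a, b in zip(ctrl0, ctrl1)],
}

for layout in (two_arrays, striped_mirrors):
    for name, members in layout.items():
        print(name, "->", ", ".join(members))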



-L



--
James P. Kinney III

As long as the general population is passive, apathetic, diverted to consumerism or hatred of the vulnerable, then the powerful can do as they please, and those who survive will be left to contemplate the outcome.
- 2011 Noam Chomsky

Damon Chesser
dchesser at acsi2000.com
damon at damtek.com



