[ale] Fault Tolerant High Read Rate System Configuration

Jim Kinney jim.kinney at gmail.com
Tue Jul 28 14:19:58 EDT 2009


There are some rather fast PCI bus configurations around, but you need
detailed motherboard specs to know for sure. Most home-user PCs are crap,
with even multiple PCIe slots sharing a common interconnect.

For numbers on device bandwidth, see here:
http://en.wikipedia.org/wiki/List_of_device_bandwidths

You don't want more devices on a card than the slot it's plugged into
can take :-)
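
To put rough numbers on that, here's a quick back-of-the-envelope check in
Python. The figures are approximations from memory (a PCIe 1.x lane moves
roughly 250 MB/s each way, PCIe 2.0 roughly 500 MB/s, plain PCI about
133 MB/s shared, PCI-X 64-bit/133 MHz about 1 GB/s), and the ~100 MB/s
sustained per drive is just an assumed example -- check the Wikipedia list
and your actual hardware before buying anything:

    # Rough, assumed bandwidth figures in MB/s -- verify against the list above.
    PCIE1_LANE = 250    # PCIe 1.x, per lane, per direction (approx.)
    PCIE2_LANE = 500    # PCIe 2.0, per lane, per direction (approx., for reference)
    PCI_33MHZ  = 133    # classic 32-bit/33 MHz PCI, shared by the whole bus
    PCIX_133   = 1066   # PCI-X 64-bit/133 MHz, theoretical peak (for reference)

    def slot_keeps_up(n_drives, drive_mbs, slot_mbs):
        """Can the slot feed all drives reading flat out at the same time?"""
        aggregate = n_drives * drive_mbs
        print(f"{n_drives} drives x {drive_mbs} MB/s = {aggregate} MB/s "
              f"vs a {slot_mbs} MB/s slot -> "
              f"{'OK' if aggregate <= slot_mbs else 'slot is the bottleneck'}")

    # Hypothetical example: 8 SATA drives at ~100 MB/s sustained each
    slot_keeps_up(8, 100, 4 * PCIE1_LANE)   # x4 slot: 800 vs 1000 -- fits
    slot_keeps_up(8, 100, 1 * PCIE1_LANE)   # x1 slot: 800 vs 250 -- don't do this
    slot_keeps_up(8, 100, PCI_33MHZ)        # plain PCI: hopeless for this job

The same arithmetic answers the PCIe bandwidth question below: an x8 PCIe 1.x
slot is roughly 2 GB/s per direction (about 4 GB/s on PCIe 2.0), so a single
12/16/24-port 3Ware or Areca card in a true x8 slot has far more slot
bandwidth behind it than plain PCI or PCI-X ever did.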
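
And for the layout I sketched in my earlier message below (mirror pairs built
from whole drives on *different* cards, then striped across the pairs), here
is a rough illustration of the pairing. The device names are made up; on the
real box the printed sets would become mdadm RAID 1 arrays joined by a RAID 0
(or mdadm's own raid10 level), i.e. ordinary Linux software RAID:

    # Hypothetical device names: three drives hanging off each of two add-on cards.
    card0 = ["/dev/sda", "/dev/sdb", "/dev/sdc"]
    card1 = ["/dev/sdd", "/dev/sde", "/dev/sdf"]

    # One whole drive from each card per mirror, so losing a single drive --
    # or an entire controller card -- still leaves every mirror a good half.
    pairs = list(zip(card0, card1))

    for n, (a, b) in enumerate(pairs):
        print(f"RAID1 md{n}: {a} + {b}   (whole drives, no partition games)")
    print("RAID0 stripe across:", " ".join(f"md{n}" for n in range(len(pairs))))

Whole-drive mirror halves mean a failed disk is just swapped for any drive of
the same size or bigger, followed by a resync -- nothing more to prep.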

On Tue, Jul 28, 2009 at 1:37 PM, Greg Clifton<gccfof5 at gmail.com> wrote:
> Hi Jim,
>
> You do mean PCIe these days, don't you? Being serial point-to-point data
> transfer, it resolves the bus contention issue, no? Ain't much in the way of
> multi-PCI-bus mobos to be had any more, as the migration to PCIe is in full
> swing. I expect PCI will be SO 20th century by Q1 '10.
>
> What about a single 12-, 16-, or 24-drive RAID controller from 3Ware or Areca
> (PCIe x8 native, I believe, for both now)? I'm sure it is much greater than
> PCI (even PCI-X @ 133 MHz is only ~800 MB/s), but what is the bandwidth on
> PCIe anyway?
>
> You are basically talking RAID 10 type configuration, no? Using the entire
> drive vs. short stroking so no complications in prepping a replacement
> drive, good thought.
>
> As Richard suggested, the customer is interested in some sort of mirrored/
> load-balanced/failover setup with 2 systems (if it fits the budget). How to
> do that is where I am mostly clueless.
>
> Thanks,
> Greg
>
> On Tue, Jul 28, 2009 at 12:24 PM, Jim Kinney <jim.kinney at gmail.com> wrote:
>>
>> A multi-PCI-bus (not just multi-PCI-_slot_) mobo with several add-on
>> SATA 300 cards. Hang fast drives off each card, matching the aggregate
>> drive throughput to the bandwidth of the PCI bus slot. Make pairs of
>> drives on different cards be mirrors. Join all mirror pairs into a
>> striped array for speed.
>>
>> Use the entire drive for each mirror slice so any failure is just a drive
>> replacement. Add extra cooling for the drives.
>>
>> On Tue, Jul 28, 2009 at 11:35 AM, Greg Clifton<gccfof5 at gmail.com> wrote:
>> > Hi Guys,
>> >
>> > I am working on a quote for a board of realtors customer who has ~ 6000
>> > people hitting his database, presumably daily per the info I pasted
>> > below.
>> > He wants fast reads and maximum uptime, perhaps mirrored systems. So I
>> > thought I would pick you smart guys' brains for any suggestions as to the
>> > most reliable/economical means of achieving his goals. He is thinking in
>> > terms of some sort of mirror of iSCSI SAN systems.
>> >
>> > Currently we are only using 50 GB of drive space, and I do not see it
>> > going above 500 GB for many years to come. What we need to do is maximize
>> > I/O throughput, primarily read access (95% read, 5% write). We have over
>> > 6,000 people continually accessing 1,132,829 (as of today) small (<1 MB)
>> > files.
>> >
>> > Tkx,
>> > Greg Clifton
>> > Sr. Sales Engineer
>> > CCSI.us
>> > 770-491-1131 x 302
>> >
>> >
>>
>>
>>
>> --
>> --
>> James P. Kinney III
>> Actively in pursuit of Life, Liberty and Happiness
>
>



-- 
-- 
James P. Kinney III
Actively in pursuit of Life, Liberty and Happiness

