[ale] Performance issue

Jeff Hubbs jhubbslist at att.net
Sun Aug 21 16:01:30 EDT 2016


Are you sure none of the drives are jumpered out-of-the-box for a lower 
speed? That happened to me a few years back; caught it before I sledded 
the drives.
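
One quick way to rule that out on a live system is to read the negotiated
link rate out of sysfs. A minimal sketch in Python, assuming the drives sit
behind the Linux SAS transport class (which exposes
/sys/class/sas_phy/*/negotiated_linkrate on most distributions):

    #!/usr/bin/env python
    # Print the negotiated link rate for every SAS phy the kernel knows about.
    # A 6Gbps drive that came up at 1.5 or 3.0 Gbit is worth a closer look.
    import glob
    import os

    for path in sorted(glob.glob("/sys/class/sas_phy/*/negotiated_linkrate")):
        phy = os.path.basename(os.path.dirname(path))
        with open(path) as f:
            rate = f.read().strip()
        print("%-12s %s" % (phy, rate))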

On 8/21/16 11:06 AM, Jim Kinney wrote:
>
> Yep. 6Gbps is the interface. But even at a paltry 100 Mbps of actual IO
> to the rust layer per drive, the 12-disk RAID 6 array (10 data disks)
> should _easily_ be able to hit 1Gbps of data IO plus control bits. The
> 38-disk array should hit nearly 4Gbps.
>
> The drives are Toshiba, Seagate and HGST. They are all rated for
> sustained read/write in the 230-260 MB/s range (SATA can only do bursts
> at those rates), so roughly 1.8 Gbps of actual data to the platters per
> drive.
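
A quick back-of-the-envelope sketch of that arithmetic (only the disk counts
and the two-parity-disk RAID 6 overhead come from the figures above; the rest
is ideal-case math that ignores controller and parity-write overhead):

    # Ideal-case aggregate data throughput for the two RAID 6 arrays.
    # RAID 6 spends two members per array on parity, so data disks = N - 2.
    def array_gbps(total_disks, per_disk_mbps):
        return (total_disks - 2) * per_disk_mbps / 1000.0   # Mbit/s -> Gbit/s

    for disks in (12, 38):
        floor = array_gbps(disks, 100)        # 100 Mbps-per-drive floor
        rated = array_gbps(disks, 230 * 8)    # 230 MB/s ~= 1840 Mbps per drive
        print("%2d disks: %5.1f Gbps at the floor, %5.1f Gbps at rated speed"
              % (disks, floor, rated))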
>
> I'm expecting a sustained 15Gbps on the smaller array and 48Gbps on the
> larger. My hardware limits are at the PCIe bus; all interconnects are
> rated for 24Gbps per quad-channel connector. It really looks like a
> kernel issue, as there seem to be waits between read/write ops.
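
If the suspicion is that the kernel is inserting waits between requests, one
way to narrow it down is to watch per-member throughput while the array is
loaded and see whether individual drives sit idle. A rough sketch that samples
/proc/diskstats directly (roughly what iostat -x reports; the sd/md name
filter below is just an example):

    #!/usr/bin/env python
    # Sample /proc/diskstats twice and print per-device MB/s.
    # /proc/diskstats counts in 512-byte units regardless of the drive's
    # physical sector size.
    import time

    def sectors_moved():
        totals = {}
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                name = fields[2]
                if name.startswith(("sd", "md")):
                    # fields[5] = sectors read, fields[9] = sectors written
                    totals[name] = int(fields[5]) + int(fields[9])
        return totals

    INTERVAL = 5.0
    before = sectors_moved()
    time.sleep(INTERVAL)
    after = sectors_moved()
    for name in sorted(after):
        mb_per_s = (after[name] - before.get(name, 0)) * 512 / INTERVAL / 1e6
        print("%-8s %8.1f MB/s" % (name, mb_per_s))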
>
> Yeah. I work in a currently non-standard Linux field. Except that 
> Linux _is_ what's always used in the HPC, big-data arena. Fun!  ;-)
>
> I don't buy brand name storage arrays due to budget. I've been able to 
> build out storage for under 50% of their cost (including my time) and 
> get matching performance (until now).
>
>
> On Aug 21, 2016 10:04 AM, "DJ-Pfulio" <DJPfulio at jdpfu.com> wrote:
>
>     On 08/20/2016 10:00 PM, Jim Kinney wrote:
>     > 6Gbps SAS. 12 in one array and 38 in another. It should saturate
>     > the bus.
>
>     6Gbps is the interface speed. No spinning disks can push that much
>     data to my knowledge - even SAS - without SSD caching/hybrids. Even
>     then, 2Gbps would be my highest guess at the real-world performance
>     (probably much lower in reality).
>
>     http://www.tomsitpro.com/articles/best-enterprise-hard-drives,2-981.html
>
>     You work in a highly specialized area, but most places would avoid
>     striping more than 8 devices for maintainability reasons. Larger
>     stripes don't provide much more throughput and greatly increase the
>     issues when something bad happens. In most companies I've worked for,
>     4-disk stripes were the default, since they provide about 80% of the
>     theoretical performance gain that any striping can offer. That was
>     the theory at the time.
>
>     Plus, many non-cheap arrays have RAM for caching, which can limit how
>     often the actual disks get touched. Since you didn't mention
>     EMC/NetApp/HDS, I assumed those weren't being used.
>
>     Of course, enterprise SSDs changed all this, but they would be
>     cost-prohibitive at the sizes you've described (for most projects). I
>     do know a few companies that run all their internal VMs on RAID10
>     SSDs and would never go back. They aren't doing "big data."

