[ale] Example large generic website for load testing?

Richard Bronosky richard at bronosky.com
Thu Sep 1 14:56:24 EDT 2016


You say you want a "large generic website" for testing, but that alone has
no value unless it is similar to the real site you will be running. What is
the real site you need to run made of? Meaning: is it static HTML files, or
is it computed in real time? If it's computed in real time, what language
(Java, Python, PHP, Perl, .NET, etc.) and what database (if any) is it
using?
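
If it helps, here is the rough sort of throwaway harness I mean. It is only
a sketch in Python; the URL, request count, and worker count are made-up
placeholders you would point at a staging copy of whatever the real site
turns out to be.

# Minimal concurrency/latency probe. TARGET_URL is a placeholder.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://staging.example.edu/"   # hypothetical staging URL
REQUESTS = 500
WORKERS = 20

def fetch(_):
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    latencies = sorted(pool.map(fetch, range(REQUESTS)))

print("median %.3fs  p95 %.3fs" % (
    latencies[len(latencies) // 2],
    latencies[int(len(latencies) * 0.95)]))

Numbers from a harness like that only mean something if the backend behind
TARGET_URL matches what you will actually run, which is my point above.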

In this community we have a habit of answering the question asked without
questioning the validity of what was asked. I'm just trying to save you
from wasting your time.



.!# Bruno #!.

On Thu, Sep 1, 2016 at 2:10 PM, Beddingfield, Allen <allen at ua.edu> wrote:

> Actually, I'm not trying to work around potential SAN issues so much.  Our
> team manages the SAN, and we have a huge investment in it (Compellent well
> in excess of a Petabyte here, and another one slightly less large in
> Atlanta).  Other than a couple of minor issues, it has been solid.
> However, the requirements for this setup (we call this "core web services")
> are for it to be able to function given a failure of any or all of the
> following:  vSphere, SAN, storage network.
>
> --
> Allen Beddingfield
> Systems Engineer
> Office of Information Technology
> The University of Alabama
> Office 205-348-2251
> allen at ua.edu
>
>
> On 9/1/16, 11:32 AM, "ale-bounces at ale.org on behalf of DJ-Pfulio" <
> ale-bounces at ale.org on behalf of DJPfulio at jdpfu.com> wrote:
>
>
>
>     SANs are a blessing and a curse.
>
>     When they work, they work very well, but in highly dynamic environments
>     with lots of new/old systems being swapped, crap happens, and there is
>     always the firmware upgrade issue with SANs and SAN storage.  With
>     hundreds of servers connected, sometimes it just isn't possible to
>     upgrade the firmware (anywhere in the chain) due to incompatibilities,
>     and a fork-lift upgrade is the only way. Newer systems work with newer
>     SAN storage, and older systems, which cannot be touched, stay on the
>     old storage as long as possible. Of course, this brings in power,
>     cooling, and space issues for most DCs. You can't just leave old stuff
>     in there. Virtualization helps greatly with this stuff, unless each VM
>     is directly attaching to storage.
>
>     If Allen isn't in control of the SAN, I can see why he'd shy away from
>     using it, especially if that team hasn't provided great support. Not
>     saying that is the situation; my own systems have been extremely lucky,
>     with fantastic SAN/storage team support over the years (except once,
>     during a nasty daytime outage).
>
>     None of this probably matters to Allen.
>
>     >
>     >
>     > -----Original Message-----
>     > From: ale-bounces at ale.org [mailto:ale-bounces at ale.org]
>     > On Behalf Of Beddingfield, Allen
>     > Sent: Thursday, September 01, 2016 9:45 AM
>     > To: Atlanta Linux Enthusiasts
>     > Subject: Re: [ale] Example large generic website for load testing?
>     >
>     > In this case, for this test, the ONLY thing I care about is disk I/O
>     > performance.  Here's why: we currently have a setup where multiple
>     > physical web servers behind an A10 load balancer are SAN-attached
>     > and sharing an OCFS2 filesystem on the SAN for the Apache data
>     > directory.  This houses sites that administration has determined to
>     > be mission-critical in the event of an emergency/disaster/loss of
>     > either datacenter. I want to replace that with VMs mounting an
>     > NFS share across a 10Gb connection (also repurposing the old
>     > physicals as web servers), but I want to test the performance of it
>     > first.
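
For the raw disk I/O side specifically, a quick sanity check is easy to
script before committing to NFS. This is only a sketch; /mnt/nfs-test is a
made-up mount point, and a real test should also cover the random and
small-file patterns Apache actually generates (fio is the usual tool for
that).

# Sequential-write throughput check against a hypothetical NFS mount.
import os
import time

MOUNT_POINT = "/mnt/nfs-test"            # placeholder NFS mount
TEST_FILE = os.path.join(MOUNT_POINT, "iotest.bin")
BLOCK = b"\0" * (1 << 20)                # 1 MiB per write
BLOCKS = 1024                            # 1 GiB total

start = time.perf_counter()
with open(TEST_FILE, "wb") as f:
    for _ in range(BLOCKS):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())                 # make sure it really hit the server
elapsed = time.perf_counter() - start

print("wrote %d MiB in %.1fs (%.1f MiB/s)"
      % (BLOCKS, elapsed, BLOCKS / elapsed))
os.remove(TEST_FILE)
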
>     >
>     > New requirements for this are:
>     > 1. Must be available in the event of a SAN failure or storage
>     >    network failure in either or both datacenters.
>     > 2. Cannot be fully dependent on the VMware vSphere environment.
>     > 3. Must be able to run from either datacenter independently of
>     >    the other.
>     >
>     > So... one physical host in each location for NFS storage, with an
>     > rsync+cron job to keep the primary and standby datacenters in sync.
>     >
>     > A pair of standalone virtualization hosts in each location, running
>     > the VMs from local storage and mounting the NFS shares from the
>     > server(s) above.
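
On the rsync+cron piece, the sketch below is the kind of wrapper I would
let cron run on the primary NFS host. The standby hostname, the export
path, and the use of --delete are all assumptions to adjust.

# Cron-driven sync from the primary export to the standby NFS host.
import subprocess
import sys

SRC = "/export/webdata/"                           # placeholder export path
DEST = "standby-nfs.example.edu:/export/webdata/"  # placeholder standby host

result = subprocess.run(
    ["rsync", "-aH", "--delete", SRC, DEST],
    capture_output=True, text=True)

if result.returncode != 0:
    # Surface failures so cron mails them to whoever owns the box.
    sys.stderr.write(result.stderr)
    sys.exit(result.returncode)

A lock file (flock) around that keeps overlapping runs from stacking up if
a sync ever takes longer than the cron interval.
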
>     >
>     > Load balancer handling the failover between the two (we currently
>     > have this working with the existing servers, but it is configured by
>     > someone else, and pretty much a black box of magic from my
>     > perspective).
>     >
>     > Oh, there is a second clustered/load balanced setup for database
>     > high availability, if you were wondering about that...
>     >
>     > The rest of it is already proven to work - I am just a bit concerned
>     > about the performance of using NFS.  We've already built a mock-up
>     > of this whole setup with retired/repurposed servers, and if it works
>     > acceptably from a 6+ year old server, I know it will be fine when I
>     > order new hardware. --
>

