[ale] Virtualization

Greg Freemyer greg.freemyer at gmail.com
Wed Feb 25 10:30:59 EST 2009


On Wed, Feb 25, 2009 at 7:48 AM, Jim Kinney <jim.kinney at gmail.com> wrote:
> On Tue, Feb 24, 2009 at 9:50 PM, Christopher Fowler
> <cfowler at outpostsentinel.com> wrote:
>
>> This is my experience as well.  I think many companies look at
>> virtualization as a way to save
>> money on hardware.
>
> Virtualization seems like a cool toy. But when I see a business use
> many virtualized machines for daily processes, mission critical
> services, etc, it just screams "single point of failure with massive
> consequences".
>
> It also speaks volumes about the overall architecture and design of
> the processes in use when they supposedly require multiple machines
> for the load, yet then get virtualized to save money on hardware.
>
> ?!?!?!?!?
>
> Huh?!? WHA?!?!?
>
> Picture this scenario: Product FOO is composed of a database, app logic,
> and a UI frontend. The designers all insist that their portion requires
> an independent machine to avoid resource conflicts. So 3 VMs get built,
> thus placing all the parts on the same machine with even higher
> overhead than if they were on a single, physical machine. The
> management viewpoint is that they don't have a new chunk of hardware to
> buy for this process. While true, they did have to buy a HONKIN' box
> (or boxes) for the VM server.

My own opinion is: Give me my own honking box and let me waste as many
resources as I want to.

== Logic
I support a relatively important project at a Fortune 50 company.

Our project is not CPU intensive, so we have three options.

option 1) Buy 3 physical servers (prod, QA, dev) and use them.  OS
administration is handled by the IT team.  For prod and QA we are only
allowed full control when one of the official IT support team lets us
"shadow" them, i.e., they run the keyboard while we stand over their
shoulder and tell them what to do.

Company IT handles all basic machine config, OS patches, backup setup,
etc., without us present (worrisome).

option 2) Same thing, but use 3 VMs.

option 3) Run in a shared J2EE environment.

I think many would argue that we should be in the shared J2EE
environment to save on equipment cost.  To me that is scary: we don't
want company IT to touch any more of our infrastructure (the J2EE
stack) than we absolutely have to.

Even with the VMs, we are concerned that we lose reliability to
hypervisor issues, e.g., a corporate IT person applies a hypervisor
patch and we go down.  Thus we want as simple a system as we can get.

So far, we have managed to keep our physical machines, even though we
run at only about 2% CPU load and need only a couple GB of RAM.
Unfortunately we have to do battle over this every few years, and the
battle over our next platform migration is currently under way.

FYI: The fully allocated cost of the servers, support, etc., is less
than 1% of the project's annual budget.

(We are a cost center and are charged for all the time people spend
interacting with our system.  Thus if we ask 500 people to spend an
hour per day interacting with the project, we get charged for 500
man-hours per day.  Even at internal rates that is a lot of money, and
it dwarfs the cost of the servers.)

Fortunately we save 10+ dollars for every dollar we spend, so the
server cost is less than 0.1% of annual project savings.
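
To make that arithmetic concrete, here is a back-of-the-envelope
sketch in Python.  The head count and hours come from the example
above; the internal rate and working days are illustrative assumptions,
not actual project figures:

    # Back-of-the-envelope for the cost-center arithmetic above.
    # The internal rate and working days are assumed for illustration.
    users = 500          # people asked to interact with the system
    hours_per_day = 1.0  # charged hours per user per day
    rate = 75.0          # assumed internal rate, in $/man-hour
    work_days = 250      # assumed working days per year

    man_hour_cost = users * hours_per_day * rate * work_days
    print(f"Annual man-hour charge: ${man_hour_cost:,.0f}")  # $9,375,000

    server_cost = 0.01 * man_hour_cost  # servers: under 1% of the budget
    savings = 10 * man_hour_cost        # we save $10+ per $1 spent
    print(f"Servers vs. savings: {server_cost / savings:.1%}")  # 0.1%

Whatever numbers you plug in, the man-hour charge dwarfs the hardware
line item, which is why we don't sweat the server cost.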

In the end, uptime is just too valuable to risk on trying to optimize
hardware cost.  That is my personal issue with VMs and other shared
solutions.

Greg
-- 
Greg Freemyer
Litigation Triage Solutions Specialist
http://www.linkedin.com/in/gregfreemyer
First 99 Days Litigation White Paper -
http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf

The Norcross Group
The Intersection of Evidence & Technology
http://www.norcrossgroup.com


