[ale] Small Clusters for VMs

Derek Atkins warlord at MIT.EDU
Fri Oct 28 22:27:39 EDT 2016


So here's a question:  have you tried running oVirt on a single machine
(sort of like the old vmware-server)?  I.e., a single machine that has
CPU and disk, running the hypervisor, ovirt-engine, etc.?

It seems silly to run NFS off a local disk just to get Self-Hosted oVirt
to work.  But of course they stopped supporting the "AllInOne" setup in
oVirt 4.0, and they don't seem to support local storage for the
Self-Hosted Engine.
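
If I do end up running NFS off the local disk anyway, I assume it would
look roughly like this (untested sketch; the export path is made up, and
36:36 is the vdsm:kvm uid/gid that oVirt wants to own its storage):

  mkdir -p /srv/ovirt/he-storage && chown 36:36 /srv/ovirt/he-storage
  echo '/srv/ovirt/he-storage *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
  systemctl enable nfs-server && systemctl start nfs-server && exportfs -ra
  hosted-engine --deploy   # point the storage question at <this-host>:/srv/ovirt/he-storage

...which is exactly the sort of kludge I'd like to avoid.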

Any ideas?

Second question:  is there a web-UI password-change option for the
AAA-JDBC plugin?  I.e., can users change their own passwords?
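
For what it's worth, I know an admin can reset a user's password from the
engine box with something like the following (sketch; "someuser" is made
up, and this assumes the internal aaa-jdbc profile):

  ovirt-aaa-jdbc-tool user password-reset someuser \
      --password-valid-to="2026-01-01 00:00:00Z"

But that still requires an admin, which is what I'm hoping to avoid.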

-derek

Jim Kinney <jim.kinney at gmail.com> writes:

> On Fri, 2016-10-28 at 10:49 -0400, DJ-Pfulio wrote:
>
>     Thanks for responding.
>     
>     Sheepdog is the storage backend. This is the way cloud stuff works on
>     the cheap; it's not a NAS.  It is distributed storage with a minimal
>     redundancy set (I'm planning on 3 copies).  Sheepdog only works with
>     qemu according to my research, which is fine.
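>     
>     Roughly, I expect the sheepdog side to look something like this
>     (untested sketch based on the docs; the store path and VDI name are
>     made up, and older releases call the CLI "collie" instead of "dog"):
>     
>     sheep /var/lib/sheepdog          # on each node, backed by local disk
>     dog cluster format -c 3          # format the cluster with 3 copies
>     qemu-img create sheepdog:vm01 20G
>     qemu-system-x86_64 -m 2048 -drive file=sheepdog:vm01,if=virtio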
>     
>     Sure, I could set up a separate storage NAS (I'd use AoE for this), but
>     that isn't needed. I already have multiple NFS servers, but I don't use
>     them for hosting VMs today; they are used for data volumes, not redundancy.
>     
>            >> Opinions follow (danger if you love what I don't) <<
>     
>     Won't be using oVirt (really RHEL-only, and it seems to be 50+ different
>     F/LOSS projects in 500 different languages [I exaggerate]) or XenServer
>     (bad taste after running it for 4 years).  I've never regretted switching
>     from ESX/ESXi and Xen to KVM, not once.
>
> oVirt is only 49 projects and 127 languages! Really!
>
> oVirt is just the web GUI front end (a pile of Java) with a mostly Python
> backend that runs KVM, plus some custom daemons to keep track of what is
> running and where. It is most certainly geared towards RHEL/CentOS, which may
> be an irritant to some. I've found the tool chain to JustWork(tm). I need VMs
> to run with minimal effort on my part, as I have no time to fight the
> complexity. I've hacked scripts to do coolness with KVM but found oVirt did
> more than I could code up in the time I have. It really is a GPL replacement
> for VMware vSphere.
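>
> Concretely, those "custom daemons" are mostly vdsm on each host, plus the
> ovirt-engine service itself on the engine box. A rough sketch for checking
> they're alive:
>
> systemctl status vdsmd          # on a virtualization host
> systemctl status ovirt-engine   # on the engine machine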
>
>     And I won't be dedicating entire machines just to being storage or VM
>     hosts, so Proxmox clusters aren't an option.  The migration from plain
>     VMs into sheepdog appears pretty straightforward (at least on YouTube).
>
> One thing I like about oVirt is that I can install the host-node code on a
> full CentOS install, or use the hypervisor version and dedicate a node
> entirely. I've used both and found them well suited to keeping VMs running.
> If there is an issue with a node, I have a full toolchain to work with. I
> don't use the hypervisor version in production.
>
> A major issue for my use is the need to have certain VMs up and running at
> all times. oVirt provides a process to migrate a VM to an alternate host if
> it (host or VM) goes down. The only "gotcha" is that the migration hosts must
> provide the same CPU capabilities, so there's no mixing of AMD and Intel
> without setting the VMs to be i686.
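>
> A quick way to compare what the hosts actually expose (rough sketch) is
> something like:
>
> lscpu | grep 'Model name'
> virsh capabilities | grep -E '<vendor>|<model>'
>
> The cluster's CPU type setting has to be something every host supports, as
> I understand it.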
>
>     Just doing research today. Need to sleep on it. Probably won't try
>     anything until Sunday night.
>
> Download CentOS 7.2.
> Install the VM host version.
> yum install epel-release
> Follow the directions here: https://www.ovirt.org/release/4.0.4/
> starting with:
> yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
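>
> If memory serves, the remaining steps from that page for the engine box are
> roughly the following (sketch; check the linked docs):
>
> yum install ovirt-engine
> engine-setup
>
> engine-setup then walks you through the rest interactively.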
>
> Be aware that when the docs refer to NFS mounts, the server for those can be
> one of the nodes that has drive space. The ISO domain is where <duh> ISO
> images are kept for installations. I have one Win10 VM running now for a DBA
> with specialty tool needs.
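>
> If I remember right, you can push an ISO into that domain from the engine
> machine with the iso-uploader tool, something like this (sketch; the domain
> and file names are made up):
>
> engine-iso-uploader --iso-domain=ISO_DOMAIN upload Win10_x64.iso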
>
>     On 10/28/2016 10:23 AM, Beddingfield, Allen wrote:
>         
>         Will you have shared storage available (a shared LUN or high-performance NFS that all hosts can access) for the virtual hard drives?
>         If so, the easiest free out-of-the-box setup is XenServer or oVirt.  I'm familiar with XenServer, but there are some oVirt fans on here, I know.
>         
>         --
>         Allen Beddingfield
>         Systems Engineer
>         Office of Information Technology
>         The University of Alabama
>         Office 205-348-2251
>         allen at ua.edu
>         
>         On 10/28/16, 9:17 AM, DJ-Pfulio <DJPfulio at jdpfu.com> wrote:
>         
>             I'm a little behind the times.  Looking to run a small cluster of VM
>             hosts, just 2-5 physical nodes.
>             
>             Reading implies it is pretty easy with 2-5 nodes using a mix of
>             sheepdog, corosync and pacemaker running on qemu-kvm VM hosts.
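>             
>             The pacemaker half would presumably be something like this
>             (untested sketch; the resource name and config path are made up):
>             
>             pcs resource create vm01 ocf:heartbeat:VirtualDomain \
>                 config=/shared/vm01.xml hypervisor=qemu:///system \
>                 migration_transport=ssh meta allow-migrate=true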
>             
>             Is that true?  Any advice from people who've done this already?
>             
>             So, is this where you'd start for a small home/biz redundant VM cluster?
>             
>             I've never done clustering on Linux, just Unix with those expensive
>             commercial tools, and that was many years ago.
>
>             In related news: Fry's has a Core i3-6100 CPU for $88 today with their
>             emailed codes.  That CPU is almost 2x faster than a first-gen Core
>             i5-750 desktop CPU.  Clustering for data redundancy at home really is
>             possible with just 2 desktop systems these days.  This can be used with
>             or without RAID (of any sort).

-- 
       Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
       Member, MIT Student Information Processing Board  (SIPB)
       URL: http://web.mit.edu/warlord/    PP-ASEL-IA     N1NWH
       warlord at MIT.EDU                        PGP key available

