[ale] ssh for automated management

Mike Murphy mike at tyderia.net
Fri Dec 17 17:51:48 EST 2004


ah. I see. There are lots of good ways to do that, but the simplest is 
to use rsync instead. If you don't need to execute commands, setting 
all the machines up to do an rsync pull at given intervals is the way 
to go.
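An interval pull like that is usually just a cron job on each host. A minimal sketch, where the host name "rsyncserver", the "configs" module, and the destination path are all placeholder names invented for the example:

```shell
# Hypothetical crontab entry for each managed host: pull from a central
# rsync daemon once an hour. The minute offset (17) just spreads the
# load; in practice you'd vary it per host so they don't all hit the
# server at the same moment.
17 * * * * rsync -a --delete rsync://rsyncserver/configs/ /etc/managed/
```

The `--delete` flag keeps the hosts from accumulating files the server no longer publishes; leave it off if the destination also holds locally managed files.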

You can control which machines get what, and when, with a config file 
they pull first. That's how a bunch of stuff I've done has worked, at least.

If you are feeling really ambitious, though, you could try other 
tools: something based on cfengine, or on apt/yum/some other package 
management tool.

Pushing the files you want to publish with ssh/scp is actually probably 
the least efficient of the approaches discussed.

Mike


David Corbin wrote:
> On Friday 17 December 2004 12:14, Mike Murphy wrote:
> 
>>depending on what sort of stuff you are doing, how big any stuff you are
>>pushing is, and how fast the network links are, 10000+ nodes is a lot of
>>nodes to admin, no matter what the technique (but if you have 10000+
>>nodes, I'm sure you know that).
>>
>>I guess it's all a question of what exactly you're up to. If you were to
>>try to ssh to each machine in series to do something (say echo "some
>>param" into /etc/somefile), you might still be surprised by how much
>>time that takes. Certainly, if you are thinking of ssh to replace some
>>other terminal-like administration solution, like doing stuff in scripts
>>over rsh, or over telnet with expect or something, it's probably worth
>>the extra overhead of encryption for the added security, though.
>>
>>I can tell you that I've found that even fully managing about 900 hosts
>>has brought up some interesting problems. In this example, we use rsync
>>to keep various configuration files, etc. in sync across all the hosts.
>>Once an hour, they each visit a dedicated rsync server to look for
>>updates. Even using rsync with a server (instead of rsync over ssh),
>>which is very efficient, we're starting to find that we might want to
>>inject a second tier here. So, it looks like this:
>>
>>master server -> n number of "staging" servers -> x number of working hosts
>>
> 
> 
> Actually, not all nodes are created equal.  Each site has its own server 
> node, so it's quite possible, and in some ways desirable, to do a two-tier 
> system.
> 
> What I failed to mention is that this is really about managing software 
> upgrades automatically.  It's not something that will happen on a daily 
> basis.  Short of serious problems, it would probably be once a month at 
> most.  Even then, it would likely be staged: 1 site for 2 days, then 10 
> sites, 25 sites, 100, 250, then the rest.

-- 

+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Mike Murphy
781 Inman Mews Drive Atlanta GA 30307
Landline: 404-653-1070
Mobile: 404-545-6234
Email: mike at tyderia.net
AIM: mmichael453
JDAM: 33:45:14.0584N  84:21:43.038W
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
