Puppet is not the end of this journey. We can abstract even further if we begin to talk about pools of servers and virtual instances. What if we have a cluster of application nodes that need to be managed as groups or if we need reporting of Facter variables from all of the nodes that include a certain Puppet class? What do we do if Apache needs a kick on 25 instances out of 1000? MCollective can do these things and more.
MCollective uses a publish/subscribe message bus to distribute commands to systems in parallel. It’s used to push requests or commands out to all of your systems at once, allowing the MCollective server to decide which of the messages it should execute, based on a set of filters in the message. A good analogue of this is an IRC chat service. We can chat in a channel and receive all the messages, but messages that are intended for us will have our name attached to them.
The messages that an MCollective server consumes are passed on to agent modules, which consume the message parameters and then do some work. Agents exist for all sorts of behaviors, such as managing running services; running Puppet; managing packages, processes, and files; and even banning IP addresses with iptables. Beyond this, agents are fairly simple to write using SimpleRPC.
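As a taste of what this looks like in practice, here are a few illustrative client invocations. The class and fact names are hypothetical, and older MCollective releases spell these commands as mc-ping, mc-facts, and mc-rpc:

```shell
# Ping every server listening on the message bus:
mco ping

# Report a Facter fact from all nodes that include the apache Puppet class:
mco facts operatingsystem -C apache

# Restart Apache only on the nodes matching a fact filter:
mco rpc service restart service=httpd -F country=uk
```

The `-C` and `-F` options are the message filters mentioned above: every server sees the request, but only those matching the filter act on it.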
MCollective installation is not as simple as Puppet was. We need to set up a Stomp messaging server and configure the MCollective server on each of our hosts before we can start using it.
ActiveMQ is Apache’s Java messaging server. We’ll need to install the Sun Java Runtime, get the ActiveMQ package, and configure it. If you’re running Ubuntu, the Sun Java package can be installed from the partner repository. You can download an ActiveMQ tarball from http://activemq.apache.org/activemq-542-release.html.
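Fetching and unpacking the tarball might look like the following; the exact mirror URL and version string are assumptions, so check the release page above for the current download link:

```shell
# Download and unpack the ActiveMQ binary release:
wget http://archive.apache.org/dist/activemq/apache-activemq/5.4.2/apache-activemq-5.4.2-bin.tar.gz
tar -xzf apache-activemq-5.4.2-bin.tar.gz
cd apache-activemq-5.4.2
```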
Once you have Java installed and the tarball extracted, you’ll need to edit the conf/activemq.xml file and add some authentication details to it. An example follows; the pertinent portions are the creation of an authentication user for MCollective and the authorization entries for the MCollective topics and queues. These are necessary to allow MCollective servers and clients to talk to one another. You’ll need these credentials in your MCollective configuration as well:
<!---- SNIP ----->
<plugins>
  <statisticsBrokerPlugin/>
  <simpleAuthenticationPlugin>
    <users>
      <authenticationUser username="mcollective" password="secrets"
          groups="mcollective,everyone"/>
      <authenticationUser username="admin" password="moresecrets"
          groups="mcollective,admins,everyone"/>
    </users>
  </simpleAuthenticationPlugin>
  <authorizationPlugin>
    <map>
      <authorizationMap>
        <authorizationEntries>
          <authorizationEntry queue=">" write="admins" read="admins" admin="admins"/>
          <authorizationEntry topic=">" write="admins" read="admins" admin="admins"/>
          <authorizationEntry topic="mcollective.>" write="mcollective"
              read="mcollective" admin="mcollective"/>
          <authorizationEntry queue="mcollective.>" write="mcollective"
              read="mcollective" admin="mcollective"/>
          <authorizationEntry topic="ActiveMQ.Advisory.>" read="everyone"
              write="everyone" admin="everyone"/>
        </authorizationEntries>
      </authorizationMap>
    </map>
  </authorizationPlugin>
</plugins>
<!---- SNIP ----->
You can now start up ActiveMQ with the bin/activemq start command, or keep it in the foreground with bin/activemq console.
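A quick sanity check, assuming your activemq.xml also defines a Stomp transportConnector on the default port 61613:

```shell
# Confirm the broker is accepting Stomp connections:
netstat -ltn | grep 61613
```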
The MCollective “server” is the part that you’ll need to deploy on all of your nodes. The client is a sort of command console that sends messages to the servers. The installation of MCollective itself is fairly straightforward, and packages are available for most distributions. You’ll need at least one client and one server installed in order to execute commands. Alternatively, there is a community Puppet module that can be used for installation of MCollective and distribution of the accompanying plug-ins:
MCollective downloads: http://www.puppetlabs.com/misc/download-options/
MCollective Puppet module: https://github.com/mikepea/puppet-module-mcollective
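On Debian or Ubuntu the package installation might look like this (package names as shipped in the Puppet Labs repositories; RHEL-family systems install the same names with yum):

```shell
# On every managed node:
apt-get install mcollective

# On the administrative host that will send commands:
apt-get install mcollective-client
```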
Once it’s installed, you’ll need to edit the /etc/mcollective/server.cfg and /etc/mcollective/client.cfg files: enter the MCollective user’s password that you specified in the ActiveMQ configuration in the plugin.stomp.password field, and specify your Stomp hostname in the plugin.stomp.host field. The plugin.psk secret must match between the server and client, as it is used to sign and validate messages. This config assumes that you have Puppet installed; it looks for the classes file at the default location and sets the fact source to Facter:
# /etc/mcollective/server.cfg
topicprefix = /topic/mcollective
libdir = /usr/share/mcollective/plugins
logfile = /var/log/mcollective.log
loglevel = info
daemonize = 1

# Plugins
securityprovider = psk
plugin.psk = mysharedsecret
connector = stomp
plugin.stomp.host = stomp.example.com
plugin.stomp.port = 61613
plugin.stomp.user = mcollective
plugin.stomp.password = secrets

# Facts
factsource = facter

# Puppet setup
classesfile = /var/lib/puppet/state/classes.txt
plugin.service.hasstatus = true
plugin.service.hasrestart = true
In order for the Facter fact source to work correctly, you will need to distribute the Facter plug-in for MCollective to the servers. The plug-in source can be fetched from GitHub at https://github.com/puppetlabs/mcollective-plugins/tree/master/facts/facter/ and installed to the server under $libdir/mcollective. Remember to restart MCollective after copying the files so that it will recognize the new plug-in.
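Concretely, the copy-and-restart step might look like this; the checked-out file name and init script name are assumptions that may differ on your platform:

```shell
# From a checkout of the mcollective-plugins repository:
cp facts/facter/facter.rb /usr/share/mcollective/plugins/mcollective/facts/

# Restart the daemon so it picks up the new fact source:
service mcollective restart
```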
You’ll need to install and configure the client in the same fashion. Here’s an example of the client configuration:
# /etc/mcollective/client.cfg
topicprefix = /topic/mcollective
libdir = /usr/share/mcollective/plugins
logfile = /dev/null
loglevel = info

# Plugins
securityprovider = psk
plugin.psk = mysharedsecret
connector = stomp
plugin.stomp.host = stomp.example.com
plugin.stomp.port = 61613
plugin.stomp.user = mcollective
plugin.stomp.password = secrets
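With a server and a client configured against the same broker, you can verify the setup end to end (the node hostname below is hypothetical):

```shell
# Each responding server prints its identity and round-trip time:
mco ping

# Show the facts, agents, and classes reported by a single node:
mco inventory web01.example.com
```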
These configuration files contain secrets that can be used to publish commands onto the MCollective channel. The MCollective servers necessarily run as root and execute with full privileges. It is of utmost importance that access to the secrets and the Stomp server be carefully controlled.
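One simple mitigation is restricting the configuration files to their owner. The sketch below demonstrates the mode on a stand-in temporary file; apply the same chmod 600 to /etc/mcollective/server.cfg, /etc/mcollective/client.cfg, and your activemq.xml as root:

```shell
# Demonstrate owner-only permissions on a stand-in config file:
cfg=$(mktemp)
chmod 600 "$cfg"
stat -c '%a' "$cfg"   # prints 600 (owner read/write only)
rm -f "$cfg"
```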