Puppet – Installing and configuring MCollective, plugins, and client orchestration

How to install MCollective to orchestrate your clients.

1. Introduction

This tutorial assumes you already have a working Puppet Server.
How to install Puppet Server and the Puppet client

The puppet-agent package is all-in-one: it already bundles MCollective, Hiera, Facter, and Puppet, so you no longer need to install the mcollective-* packages as you did on versions older than Puppet 4.x.x. MCollective needs a messaging service, a middleware layer for exchanging messages between applications, known as a "broker"; here we will use Apache ActiveMQ, or simply ActiveMQ.
RabbitMQ is also an option, but it is not covered in this tutorial.

2. Infrastructure and prerequisites

Below is the infrastructure used in this lab, for you to use as a reference.

Puppet Server: puppetserver-01.devopslab.com.br
Nodes/clients: puppetclient-01.devopslab.com.br, puppetclient-02.devopslab.com.br
OS: CentOS 7 – 64-bit, minimal install.
Applications: puppetserver 2.2.1
puppet 4.3.2
puppet-agent 1.3.5

3. Installing ActiveMQ

On the Puppet Server host, install the repository:

# rpm -hiv https://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm

Install ActiveMQ.

# yum install activemq

Configuration. Set the username and password.

# vi /etc/activemq/credentials.properties
activemq.username=mcollective
activemq.password=marionette

The username and password can be whatever you like; here I am following the MCollective defaults.

Set a name for the ActiveMQ broker.
In this case I set: 'brokerName="puppetserver-01.devopslab.com.br"'.

# vi /etc/activemq/activemq.xml

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="puppetserver-01.devopslab.com.br" dataDirectory="${activemq.data}"> 
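To confirm the change took effect, you can grep the brokerName attribute out of activemq.xml. The sketch below uses a temporary copy of the `<broker>` line so it is self-contained; on the real server point grep at /etc/activemq/activemq.xml.

```shell
# Sanity check: confirm the brokerName attribute is set as expected.
# (Self-contained sketch: a temp copy stands in for /etc/activemq/activemq.xml.)
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="puppetserver-01.devopslab.com.br" dataDirectory="${activemq.data}">
EOF
grep -o 'brokerName="[^"]*"' "$cfg"
# prints: brokerName="puppetserver-01.devopslab.com.br"
rm -f "$cfg"
```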

Still in activemq.xml, adjust the memory and storage settings. ActiveMQ ships configured to use up to 100 GB of store space plus 50 GB for temporary files; there is no point in reserving all that for a lab, so set it to something like this:

# vi /etc/activemq/activemq.xml

<systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage percentOfJvmHeap="40" />
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="1 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="500 mb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>

Here is my complete activemq.xml file.

<!--
    Licensed to the Apache Software Foundation (ASF) under one or more
    contributor license agreements.  See the NOTICE file distributed with
    this work for additional information regarding copyright ownership.
    The ASF licenses this file to You under the Apache License, Version 2.0
    (the "License"); you may not use this file except in compliance with
    the License.  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->
<!-- START SNIPPET: example -->
<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
  http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

    <!-- Allows us to use system properties as variables in this configuration file -->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <value>file:${activemq.conf}/credentials.properties</value>
        </property>
    </bean>

    <!--
        The <broker> element is used to configure the ActiveMQ broker.
    -->
    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="puppetserver-01.devopslab.com.br" dataDirectory="${activemq.data}">

        <destinationPolicy>
            <policyMap>
              <policyEntries>
                <policyEntry topic=">" >
                    <!-- The constantPendingMessageLimitStrategy is used to prevent
                         slow topic consumers to block producers and affect other consumers
                         by limiting the number of messages that are retained
                         For more information, see:

                         http://activemq.apache.org/slow-consumer-handling.html

                    -->
                  <pendingMessageLimitStrategy>
                    <constantPendingMessageLimitStrategy limit="1000"/>
                  </pendingMessageLimitStrategy>
                </policyEntry>
              </policyEntries>
            </policyMap>
        </destinationPolicy>


        <!--
            The managementContext is used to configure how ActiveMQ is exposed in
            JMX. By default, ActiveMQ uses the MBean server that is started by
            the JVM. For more information, see:

            http://activemq.apache.org/jmx.html
        -->
        <managementContext>
            <managementContext createConnector="false"/>
        </managementContext>

        <!--
            Configure message persistence for the broker. The default persistence
            mechanism is the KahaDB store (identified by the kahaDB tag).
            For more information, see:

            http://activemq.apache.org/persistence.html
        -->
        <persistenceAdapter>
            <kahaDB directory="${activemq.data}/kahadb"/>
        </persistenceAdapter>


          <!--
            The systemUsage controls the maximum amount of space the broker will
            use before disabling caching and/or slowing down producers. For more information, see:
            http://activemq.apache.org/producer-flow-control.html
          -->
          <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage percentOfJvmHeap="40" />
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="1 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="500 mb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>

        <!--
            The transport connectors expose ActiveMQ over a given protocol to
            clients and other brokers. For more information, see:

            http://activemq.apache.org/configuring-transports.html
        -->
        <transportConnectors>
            <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
            <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        </transportConnectors>

        <!-- destroy the spring context on shutdown to stop jetty -->
        <shutdownHooks>
            <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
        </shutdownHooks>

    </broker>

    <!--
        Enable web consoles, REST and Ajax APIs and demos
        The web consoles requires by default login, you can disable this in the jetty.xml file

        Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
    -->
    <import resource="jetty.xml"/>

</beans>
<!-- END SNIPPET: example -->

Firewall
Open the ActiveMQ port, 61613/tcp, in the firewall.

# firewall-cmd --permanent --add-port=61613/tcp
# systemctl reload firewalld

Enable and start ActiveMQ.

# systemctl enable activemq.service
# systemctl start activemq.service

Check the ActiveMQ logs.
# tail -f /var/log/activemq/activemq.log

2016-02-26 22:30:02,416 [main           ] INFO  XBeanBrokerFactory$1           - Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@36f6e879: startup date [Fri Feb 26 22:30:02 BRT 2016]; root of context hierarchy
2016-02-26 22:30:03,395 [main           ] INFO  PListStoreImpl                 - PListStore:[/usr/share/activemq/data/puppetserver-01.devopslab.com.br/tmp_storage] started
2016-02-26 22:30:03,435 [main           ] INFO  BrokerService                  - Using Persistence Adapter: KahaDBPersistenceAdapter[/usr/share/activemq/data/kahadb]
2016-02-26 22:30:03,767 [main           ] INFO  MessageDatabase                - KahaDB is version 5
2016-02-26 22:30:03,789 [main           ] INFO  MessageDatabase                - Recovering from the journal ...
2016-02-26 22:30:03,803 [main           ] INFO  MessageDatabase                - Recovery replayed 355 operations from the journal in 0.024 seconds.
2016-02-26 22:30:03,951 [main           ] INFO  BrokerService                  - Apache ActiveMQ 5.9.1 (puppetserver-01.devopslab.com.br, ID:puppetserver-01-39882-1456536603823-0:1) is starting
2016-02-26 22:30:04,021 [main           ] INFO  TransportServerThreadSupport   - Listening for connections at: tcp://puppetserver-01:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600
2016-02-26 22:30:04,022 [main           ] INFO  TransportConnector             - Connector openwire started
2016-02-26 22:30:04,025 [main           ] INFO  TransportServerThreadSupport   - Listening for connections at: amqp://puppetserver-01:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600
2016-02-26 22:30:04,026 [main           ] INFO  TransportConnector             - Connector amqp started
2016-02-26 22:30:04,029 [main           ] INFO  TransportServerThreadSupport   - Listening for connections at: stomp://puppetserver-01:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600
2016-02-26 22:30:04,030 [main           ] INFO  TransportConnector             - Connector stomp started
2016-02-26 22:30:04,035 [main           ] INFO  TransportServerThreadSupport   - Listening for connections at: mqtt://puppetserver-01:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600
2016-02-26 22:30:04,035 [main           ] INFO  TransportConnector             - Connector mqtt started
2016-02-26 22:30:04,130 [main           ] INFO  Server                         - jetty-7.6.9.v20130131
2016-02-26 22:30:04,160 [main           ] INFO  ContextHandler                 - started o.e.j.s.ServletContextHandler{/,null}
2016-02-26 22:30:04,191 [main           ] INFO  AbstractConnector              - Started SelectChannelConnector@0.0.0.0:61614
2016-02-26 22:30:04,191 [main           ] INFO  WSTransportServer              - Listening for connections at ws://puppetserver-01:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600
2016-02-26 22:30:04,192 [main           ] INFO  TransportConnector             - Connector ws started
2016-02-26 22:30:04,193 [main           ] INFO  BrokerService                  - Apache ActiveMQ 5.9.1 (puppetserver-01.devopslab.com.br, ID:puppetserver-01-39882-1456536603823-0:1) started
2016-02-26 22:30:04,193 [main           ] INFO  BrokerService                  - For help or more information please see: http://activemq.apache.org
2016-02-26 22:30:04,350 [main           ] INFO  Server                         - jetty-7.6.9.v20130131
2016-02-26 22:30:04,578 [main           ] INFO  ContextHandler                 - started o.e.j.w.WebAppContext{/admin,file:/var/lib/activemq/webapps/admin/}
2016-02-26 22:30:04,710 [main           ] INFO  /admin                         - Initializing Spring FrameworkServlet 'dispatcher'
2016-02-26 22:30:04,761 [main           ] INFO  ndingBeanNameUrlHandlerMapping - Mapped URL path [/createDestination.action] onto handler '/createDestination.action'
2016-02-26 22:30:04,762 [main           ] INFO  ndingBeanNameUrlHandlerMapping - Mapped URL path [/deleteDestination.action] onto handler '/deleteDestination.action'
2016-02-26 22:30:04,762 [main           ] INFO  ndingBeanNameUrlHandlerMapping - Mapped URL path [/createSubscriber.action] onto handler '/createSubscriber.action'
2016-02-26 22:30:04,762 [main           ] INFO  ndingBeanNameUrlHandlerMapping - Mapped URL path [/deleteSubscriber.action] onto handler '/deleteSubscriber.action'
2016-02-26 22:30:04,762 [main           ] INFO  ndingBeanNameUrlHandlerMapping - Mapped URL path [/sendMessage.action] onto handler '/sendMessage.action'
2016-02-26 22:30:04,762 [main           ] INFO  ndingBeanNameUrlHandlerMapping - Mapped URL path [/purgeDestination.action] onto handler '/purgeDestination.action'
2016-02-26 22:30:04,763 [main           ] INFO  ndingBeanNameUrlHandlerMapping - Mapped URL path [/deleteMessage.action] onto handler '/deleteMessage.action'
2016-02-26 22:30:04,763 [main           ] INFO  ndingBeanNameUrlHandlerMapping - Mapped URL path [/copyMessage.action] onto handler '/copyMessage.action'
2016-02-26 22:30:04,763 [main           ] INFO  ndingBeanNameUrlHandlerMapping - Mapped URL path [/moveMessage.action] onto handler '/moveMessage.action'
2016-02-26 22:30:04,763 [main           ] INFO  ndingBeanNameUrlHandlerMapping - Mapped URL path [/deleteJob.action] onto handler '/deleteJob.action'
2016-02-26 22:30:04,885 [main           ] INFO  WebAppContext                  - ActiveMQ Console at http://0.0.0.0:8161/admin
2016-02-26 22:30:04,893 [main           ] INFO  ContextHandler                 - started o.e.j.w.WebAppContext{/camel,file:/var/lib/activemq/webapps/camel}
2016-02-26 22:30:04,897 [main           ] INFO  WebAppContext                  - WebApp@5395829 at http://0.0.0.0:8161/camel
2016-02-26 22:30:04,905 [main           ] INFO  ContextHandler                 - started o.e.j.w.WebAppContext{/demo,file:/var/lib/activemq/webapps/demo}
2016-02-26 22:30:04,907 [main           ] INFO  WebAppContext                  - WebApp@5395829 at http://0.0.0.0:8161/demo
2016-02-26 22:30:04,914 [main           ] INFO  ContextHandler                 - started o.e.j.w.WebAppContext{/fileserver,file:/var/lib/activemq/webapps/fileserver}
2016-02-26 22:30:04,918 [main           ] INFO  WebAppContext                  - WebApp@5395829 at http://0.0.0.0:8161/fileserver
2016-02-26 22:30:04,922 [main           ] INFO  AbstractConnector              - Started SelectChannelConnector@0.0.0.0:8161

Note in the logs that ActiveMQ started successfully ("started").

Check that port 61613 is bound; it must be listening.

# netstat -tnpau | grep 61613
tcp6       0      0  :::61613		:::*	LISTEN	5075/java

4. Configuring the MCollective server and client

To configure the MCollective server and client, just make the appropriate adjustments to MCollective's client.cfg and server.cfg files.

Pay attention to these lines:

plugin.activemq.pool.1.host = puppetserver-01.devopslab.com.br
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = marionette

# vi /etc/puppetlabs/mcollective/server.cfg

main_collective = mcollective
collectives = mcollective

libdir = /opt/puppetlabs/mcollective/plugins

# consult the "classic" libdirs too
libdir = /usr/share/mcollective/plugins
libdir = /usr/libexec/mcollective

logfile = /var/log/puppetlabs/mcollective.log
loglevel = info
daemonize = 1

# Plugins
securityprovider = psk
plugin.psk = unset

connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = puppetserver-01.devopslab.com.br
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = marionette
#plugin.activemq.pool.1.ssl = false

# Facts
factsource = yaml
plugin.yaml = /etc/puppetlabs/mcollective/facts.yaml
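The factsource = yaml line above points at a facts.yaml file that nothing creates automatically. One common approach (an assumption here, not something these packages set up for you) is a cron job that regenerates it from Facter; the file itself is just flat "fact: value" YAML. A sketch of the format:

```shell
# facts.yaml holds flat "fact: value" YAML. A common way to keep it fresh
# (assumption - adapt to your environment) is a cron entry such as:
#   */10 * * * * facter -p -y > /etc/puppetlabs/mcollective/facts.yaml
# The sketch below only illustrates the expected file format.
facts=$(mktemp)
cat > "$facts" <<'EOF'
operatingsystem: CentOS
operatingsystemrelease: "7.2"
EOF
cat "$facts"
rm -f "$facts"
```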

MCollective client configuration.
# vi /etc/puppetlabs/mcollective/client.cfg

main_collective = mcollective
collectives = mcollective

libdir = /opt/puppetlabs/mcollective/plugins

# consult the "classic" libdirs too
libdir = /usr/share/mcollective/plugins
libdir = /usr/libexec/mcollective

logger_type = console
loglevel = warn

# Plugins
securityprovider = psk
plugin.psk = unset

connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = puppetserver-01.devopslab.com.br
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = marionette

connection_timeout = 3

Start and enable MCollective.

# systemctl start mcollective.service
# systemctl enable mcollective.service

The MCollective log should show output similar to this:
# tail -f /var/log/puppetlabs/mcollective.log

I, [2016-02-26T22:47:36.675684 #5211]  INFO -- : config.rb:167:in `loadconfig' The Marionette Collective version 2.8.7 started by /opt/puppetlabs/puppet/bin/mcollectived using config file /etc/puppetlabs/mcollective/server.cfg
I, [2016-02-26T22:47:36.675875 #5211]  INFO -- : mcollectived:64:in `<main>' The Marionette Collective 2.8.7 started logging at info level
I, [2016-02-26T22:47:36.679816 #5218]  INFO -- : activemq.rb:211:in `initialize' ActiveMQ connector initialized.  Using stomp-gem 1.3.3
I, [2016-02-26T22:47:36.681132 #5218]  INFO -- : activemq.rb:313:in `connection_headers' Connecting without STOMP 1.1 heartbeats, if you are using ActiveMQ 5.8 or newer consider setting plugin.activemq.heartbeat_interval
I, [2016-02-26T22:47:36.681669 #5218]  INFO -- : activemq.rb:114:in `on_connecting' TCP Connection attempt 0 to stomp://mcollective@puppetserver-01.devopslab.com.br:61613
I, [2016-02-26T22:47:36.690413 #5218]  INFO -- : activemq.rb:119:in `on_connected' Connected to stomp://mcollective@puppetserver-01.devopslab.com.br:61613

Note that MCollective connects to ActiveMQ.

4.1 Configuring the network nodes/clients

Copy the "client.cfg" and "server.cfg" files to the "/etc/puppetlabs/mcollective" directory on all of your clients and restart MCollective.
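The copy can be scripted. A dry-run sketch (the echo prints each command instead of executing it; remove the echo to copy for real — hostnames are this lab's examples, adjust to yours):

```shell
# Push the MCollective config to every node (dry-run: remove "echo" to
# actually copy). Hostnames are the lab examples from this tutorial.
for node in puppetclient-01 puppetclient-02; do
  for f in server.cfg client.cfg; do
    echo scp /etc/puppetlabs/mcollective/$f \
      "root@${node}.devopslab.com.br:/etc/puppetlabs/mcollective/"
  done
done
```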

# systemctl start mcollective.service
# systemctl enable mcollective.service

5. Testing MCollective

With MCollective configured on the Puppet Server and on all the network's clients, log in to the Puppet Server and test MCollective with the mco command.

[root@puppetserver-01 ~]# mco find
puppetserver-01
puppetclient-02
puppetclient-01

[root@puppetserver-01 ~]# mco ping 
puppetserver-01                          time=23.88 ms
puppetclient-02                          time=64.69 ms
puppetclient-01                          time=65.56 ms
---- ping statistics ----
3 replies max: 65.56 min: 23.88 avg: 51.37

Three MCollective clients were discovered. This is MCollective's basic operation, the "ping" communication test, but it is not yet orchestration: you still cannot run commands on the hosts, manage Puppet, and so on. For that, you need to install plugins.
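Even at this stage you can already target subsets of nodes: mco accepts discovery filters, such as an identity regex (-I /…/) or, once facts.yaml is populated, fact filters (-F). Shown as echoes so the sketch is self-contained; run the real commands on the Puppet Server.

```shell
# Discovery filters (dry-run: remove "echo" and run on the Puppet Server).
echo 'mco ping -I /puppetclient/'            # identity regex: only the clients
echo 'mco ping -F operatingsystem=CentOS'    # fact filter (needs facts.yaml)
```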

6. Installing MCollective plugins – the Puppet plugin

There are several plugins, controlling different aspects of the system.
See the MCollective plugin types here:
https://docs.puppetlabs.com/mcollective/deploy/plugins.html#about-plugins–available-plugin-types

There are two ways to install MCollective plugins.
The first is from the PuppetLabs repositories, which is what we will use here, simply by running "yum install" with the plugin's package name.

The second is copying the plugin's configuration and code files (.rb, .ddl, .erb) into MCollective's library directories. This is the harder way, but some plugins can only be installed like this.
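For reference, the manual method boils down to dropping the plugin's .rb and .ddl files into one of the libdir paths from server.cfg (agent plugins go under a mcollective/agent subdirectory) and restarting MCollective. A dry-run sketch, where "myplugin" is a hypothetical plugin name:

```shell
# Manual plugin install (dry-run: remove "echo" to copy for real).
# "myplugin" is a hypothetical name; an agent plugin ships as a .rb
# implementation plus a .ddl metadata file.
libdir=/opt/puppetlabs/mcollective/plugins
for f in myplugin.rb myplugin.ddl; do
  echo cp "agent/$f" "$libdir/mcollective/agent/"
done
echo systemctl restart mcollective.service
```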

Let's start by installing the plugins for remote Puppet management.

Install the repository below on both the Puppet Server and the nodes/clients.

# rpm -hiv https://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm

Puppet Server
On the Puppet Server, install the plugins mcollective-puppet-client and mcollective-puppet-common.

[root@puppetserver-01 /]# yum install mcollective-puppet-client mcollective-puppet-common

In the "mcollective-puppet-client" package, read "client" as the machine that will issue the orchestration commands; in this case, the Puppet Server will orchestrate the clients.

Nodes/clients
On the nodes/clients, install the plugins mcollective-puppet-agent and mcollective-puppet-common.

[root@puppetclient-01 /]# yum install mcollective-puppet-agent mcollective-puppet-common

In the "mcollective-puppet-agent" package, read "agent" as the hosts that will receive the orchestration commands; the "agent" sits waiting for orchestration commands from the client (the Puppet Server).

After installing the plugins, restart MCollective.

# systemctl restart mcollective.service

6.1 Testing orchestration via MCollective

We have just configured the Puppet Server and all the clients, so let's put it to the test. Log in to the Puppet Server to run a few checks.

A. Check the Puppet status on the clients.

[root@puppetserver-01 ~]# mco puppet status -I puppetclient-01
 * [ ============================================================> ] 1 / 1
   puppetclient-01: Currently idling; last completed run 22 minutes 39 seconds ago
Summary of Applying:
   false = 1
Summary of Daemon Running:
   running = 1
Summary of Enabled:
   enabled = 1
Summary of Idling:
   true = 1
Summary of Status:
   idling = 1
Finished processing 1 / 1 hosts in 21.67 ms

B. Re-run Puppet on a client so it checks the server for new changes.
Imagine you created a new configuration on the Puppet Server, for example editing the clients' hosts file, and you want to force the change to be applied; for that, run a "runonce". See:

[root@puppetserver-01 ~]# mco puppet runonce -I puppetclient-01 -I puppetclient-02
 * [ ============================================================> ] 2 / 2
Finished processing 2 / 2 hosts in 125.00 ms
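runonce also accepts the same discovery filters as mco ping, and for larger fleets the mco puppet plugin provides runall, which triggers runs across all nodes while keeping at most N running concurrently. A dry-run sketch:

```shell
# More ways to trigger runs (dry-run: remove "echo", run on the Puppet Server).
echo 'mco puppet runonce -F operatingsystem=CentOS'   # fact-filtered run
echo 'mco puppet runall 2'                            # whole fleet, 2 at a time
```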

For all the options of the "mco puppet" command, check the help.

# mco help puppet

That wraps up orchestration with MCollective.
To summarize how it works: you need ActiveMQ installed, the server.cfg and client.cfg files configured, and the plugins installed.

Pay attention to the difference between "mcollective-puppet-client" and "mcollective-puppet-agent".

As mentioned in this tutorial, there are several plugins; for example, there is one that runs shell commands, which is very handy, since it lets you execute Linux commands via MCollective.

There is also a Facter plugin, which lets you query system data and build an inventory: how many CPUs, how much memory, which IPs, network interfaces, OS version, any information about the operating system.

Read: Tutorial – Installing and using the Facter plugin

See you next time.

Leonardo Macedo Cerqueira