Setup guide for ActiveMQ as regional broker

From PDP/Grid Wiki
Revision as of 18:08, 23 March 2015 by (talk | contribs) (typos)

This guide is meant as an aid in setting up an ActiveMQ messaging broker for use as a regional messaging service. The setup assumes only a single node; if you wish to set up a network of brokers, please refer to [1]. The first two parts cover the basic installation and configuration, while the last part targets a specialized use case in which a broker is configured as a relaying service.


Since ActiveMQ is a Java-based service, you first need to make sure you have Java installed. After getting the latest ActiveMQ release [2][3] you can unpack it to /opt/:

tar xzf apache-activemq-5.11.1-bin.tar.gz
mv apache-activemq-5.11.1 /opt
ln -s /opt/apache-activemq-5.11.1/ /opt/activemq

You can start and stop the broker service with

/opt/activemq/bin/activemq [start|stop]

By default ActiveMQ comes with a sample configuration that defines a simple broker with some existing transport connectors (endpoints for clients, consumers and producers alike). You should be able to use these endpoints right away with any queue or topic, since by default ActiveMQ lets you define new queues/topics on the fly. This can be restricted by access control rules described later on.


The installation will leave you with a broker running its default configuration. To configure the way your broker behaves you have to edit the /opt/activemq/conf/activemq.xml file. Inside the file, every change to the broker configuration has to be made inside the <broker> bean definition. Make sure to complete the definition of your broker bean with an 'id' (which makes it easier to reference this bean later on) and a 'brokerName', which is usually the same as the hostname. The remaining attributes will be discussed later on.

<broker id="broker" xmlns="http://activemq.apache.org/schema/core" brokerName="national-broker" dataDirectory="${activemq.data}" useJmx="true" persistent="true">


To set up an ssl+stomp endpoint in the broker you have to define a transport connector inside the broker definition. This looks like the following:

   <transportConnector name="stomp+ssl" uri="stomp+ssl://0.0.0.0:61612"/>

Alternatively, you can define an unsecured endpoint with the 'stomp' scheme only. If you wish to use SSL you will also have to define the right sslContext for the broker [4]. This can be done by adding the following to your bean definition:

   <sslContext keyStore="file:${activemq.conf}/broker.ks" keyStorePassword="password"
               trustStore="file:${activemq.conf}/broker.ts" trustStorePassword="password" />

The keyStore and trustStore are two Java stores that can be created with the 'keytool' utility. The keyStore contains the host certificate that is used to authenticate this broker, while the trustStore contains trusted user and/or CA certificates. If you only want server authentication, defining a keyStore is sufficient. If you wish to have client authentication as well, you will also need a trustStore, together with the 'needClientAuth=true' flag appended to the uri in your transport connector definition.

If you have your certificates ready in PEM format you can go ahead and create these Java stores by first converting your certificates and keys into PKCS12 format, and then importing them into a new keyStore.

openssl pkcs12 -export -in /etc/grid-security/hostcert.pem -inkey /etc/grid-security/hostkey.pem -out /tmp/cert.p12 -name server-cert -CAfile /etc/grid-security/certificates/ca.crt -caname root -chain
/usr/java/latest/bin/keytool -importkeystore -srckeystore /tmp/cert.p12 -srcstoretype PKCS12 -destkeystore /opt/activemq/conf/broker.ks

Similarly, you can create the trustStore by importing the trusted CA (or client) certificate:

/usr/java/latest/bin/keytool -import -file /etc/grid-security/certificates/rootCA.pem -alias client-CA -keystore /opt/activemq/conf/broker.ts

Note! Make sure to configure the same password for the imported private key and the keyStore holding it; otherwise Java will not be able to access your private key inside the keyStore.

Note! Newer versions of Java have disabled the use of SSLv3, so make sure to use updated client applications with the broker; otherwise the SSL handshake will fail.


The main motivation behind securing the broker with client authentication/authorization is to prevent unknown clients from publishing to queues, and also to prevent any client from creating new destinations on the fly. I was only able to implement this restriction by using access control rules.

If you want to have access control rules defined on your clients, client authentication through the 'needClientAuth=true' flag is not enough, because you want to be able to identify individual clients by their DN and map them into user groups. To achieve this you can use one of the authentication plugins: jaasCertificateAuthenticationPlugin [5] for simple certificate-based authentication, or jaasDualAuthenticationPlugin [6] if you want a mixed authentication method that also supports password credentials. In this example I will use the jaasDualAuthenticationPlugin; configuring the simple certificate-based plugin is just a matter of removing the lines regarding password authentication.

To enable the authentication plugin you have to add the following line inside the <plugins> section of your broker definition:

   <jaasDualAuthenticationPlugin configuration="CredLogin" sslConfiguration="CertLogin" />

The configuration="CredLogin" attribute refers to the configuration used for password-based authentication, while sslConfiguration="CertLogin" refers to the certificate-based authentication. These two methods are further defined in /opt/activemq/conf/login.config, which is the default location used by the JAAS plugins [7]. The login.config file lets you define the source files of user credentials and groupings for both 'CredLogin' and 'CertLogin':

CredLogin {
    org.apache.activemq.jaas.PropertiesLoginModule sufficient
        org.apache.activemq.jaas.properties.user="users.properties"
        org.apache.activemq.jaas.properties.group="groups.properties";
};

CertLogin {
    org.apache.activemq.jaas.TextFileCertificateLoginModule required
        org.apache.activemq.jaas.textfiledn.user="dn-users.properties"
        org.apache.activemq.jaas.textfiledn.group="dn-groups.properties";
};
Note! You can find out more about the 'sufficient' and 'required' flags at [8]. These are used to let the service fall back from one authentication method to the other, and can be tuned to your liking.

You can define users with password credentials by adding lines of 'username=password' to users.properties. You can group the authenticated users by adding lines of 'group=csv-user-list' to groups.properties.

For having authenticated clients over the stomp+ssl endpoint defined earlier you will most likely use the certificate-based authentication method. You can add users by adding lines of 'username=DN' to the dn-users.properties file. Note that you should add the DNs of both producers and consumers of the system, since both are going to use the same endpoint. In dn-groups.properties you can define groups of authenticated users by adding lines of 'group=csv-user-list'. A user can belong to multiple groups. This is useful if you want to create a general group of users containing everybody, and also a more fine-grained grouping based on consumers and producers. For example, dn-groups.properties can look like:

#general user grouping including every user
#useful for setting general permissions (such as for Advisories)
users=alice,bob

#fine-grained grouping of consumers and producers
consumers=alice
producers=bob
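The 'group=csv-user-list' lines resolve to a user-to-groups mapping, with a user accumulating every group it is listed under. A minimal Python sketch of that resolution (not ActiveMQ code; user and group names are hypothetical):

```python
# Sketch (not ActiveMQ code): how 'group=csv-user-list' lines resolve to
# a user -> groups mapping. User and group names here are hypothetical.

def parse_groups(lines):
    """Map each user name to the set of groups it appears in."""
    membership = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        group, _, users = line.partition("=")
        for user in users.split(","):
            membership.setdefault(user.strip(), set()).add(group.strip())
    return membership

groups_file = [
    "#general user grouping",
    "users=alice,bob",
    "consumers=alice",
    "producers=bob",
]
membership = parse_groups(groups_file)
assert membership["alice"] == {"users", "consumers"}  # multiple groups per user
assert membership["bob"] == {"users", "producers"}
```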
Authorization rules can be defined on the user groups created in the Authentication section. To do so you have to enable the authorization plugin inside the <plugins> section of the broker definition [9]:

		<authorizationPlugin>
		  <map>
		    <authorizationMap>
		      <authorizationEntries>
		        <authorizationEntry queue=">" read="admins,users" write="admins,users" admin="admins" />
		        <authorizationEntry topic="ActiveMQ.Advisory.>" read="admins,users" write="admins,users" admin="admins,users"/>
		      </authorizationEntries>
		    </authorizationMap>
		  </map>
		</authorizationPlugin>

Here you can define the individual authorization entries suitable for your setup. Two entries that you might encounter in every setup are the ones specified above. The first entry, using the '>' wildcard, matches any destination. This is useful if you want to define a global access control rule. In the example provided above the 'users' group is not allowed to perform administrative operations on any destination. This restriction prevents clients belonging to the 'users' group from creating new destinations at will, allowing them only to read/write to existing destinations. The second entry allows every authenticated user full access to any topic under the 'ActiveMQ.Advisory.>' destination. The Advisory destinations are used by ActiveMQ for connection management [10], and so they need to be accessible by any client (consumers and producers alike). With a restrictive access control rule that does not allow access to the Advisory destinations you might encounter errors when connecting clients to the broker.
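The wildcard semantics behind these entries can be illustrated with a small Python sketch (a simplified model, not ActiveMQ's actual matcher): '.' separates destination name segments, '*' matches exactly one segment, and a trailing '>' matches that segment and everything below it.

```python
# Simplified model (not ActiveMQ's actual matcher) of destination wildcards:
# '.' separates name segments, '*' matches exactly one segment, and a
# trailing '>' matches that segment and everything below it.

def matches(pattern, destination):
    pat = pattern.split(".")
    dest = destination.split(".")
    for i, part in enumerate(pat):
        if part == ">":
            return True              # '>' swallows the rest of the name
        if i >= len(dest):
            return False             # pattern is longer than the destination
        if part != "*" and part != dest[i]:
            return False
    return len(pat) == len(dest)     # no wildcard left: lengths must agree

assert matches(">", "CUSTOM.DESTINATION")                        # global rule
assert matches("ActiveMQ.Advisory.>", "ActiveMQ.Advisory.Connection")
assert not matches("ActiveMQ.Advisory.>", "CUSTOM.DESTINATION")
```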

The above access control rules will allow any authenticated client to act as both consumer and producer. If you want finer-grained control over who can be a consumer and who can be a producer, you can add additional authorization entries for specific destinations. In the rule defined below, read permissions are only granted to the 'consumers' group, while write permissions are only granted to the 'producers' group.

<authorizationEntry queue="CUSTOM.DESTINATION" read="admins,consumers" write="admins,producers" admin="admins" />


Several ways exist to monitor your ActiveMQ broker as an administrator [11]. ActiveMQ allows the use of JMX consoles, although in practice I was unable to attach jConsole to it. You should, however, leave JMX enabled in your broker, otherwise you will encounter problems such as the start-stop scripts and the web console not working properly. You can make sure JMX is enabled by having the useJmx="true" flag in your broker definition.

ActiveMQ also offers a web console deployed in a Jetty container, from where you can have an overview of the system with details about existing queues, connected clients, pending messages, etc. The web console uses a VM Transport Connector internally to talk to your broker instance. You can find the configuration of this connection at /opt/activemq/webapps/admin/WEB-INF/webconsole-embedded.xml. If you enabled the authentication plugin on your broker, ActiveMQ will require authentication on every connection, even the ones made through VM Transport Connectors. If you don't have the right admin credentials set up and supplied to the web console, it will still work, but you will be unable to use some of its functionality. Upon startup the web console takes the credentials used to establish the VM transport connection from /opt/activemq/conf/credentials.properties. Note that these admin credentials must be defined at the authentication plugin. Moreover, note that these are not the same as the credentials used to authenticate when accessing the web console itself. For access control on the web console see the /opt/activemq/conf/jetty-realm.properties file.


Messages arriving at the broker are kept in memory and are lost across broker restarts. With persistence enabled you can have messages that survive a broker restart. According to the guide at [12] you have to set the persistent="true" flag in your broker definition to enable persistence. According to these sources [13] [14], however, persistence is a property of a single message, and the producer is responsible for declaring messages persistent. You can do this in your producers by adding the "persistent:true" flag to the header of the send requests. In my personal experience messages are only persisted if the "persistent:true" flag is found in the message header.
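As an illustration of the per-message nature of persistence, here is a sketch of the raw STOMP SEND frame a producer would emit; the 'persistent:true' header travels with each individual message (destination and body are hypothetical examples):

```python
# Sketch of the raw STOMP SEND frame a producer emits; the 'persistent:true'
# header travels with each individual message, which is why persistence is a
# per-message property. Destination and body are hypothetical examples.

def stomp_send_frame(destination, body, persistent=True):
    headers = ["destination:" + destination]
    if persistent:
        headers.append("persistent:true")
    # A STOMP frame is: COMMAND, header lines, a blank line, body, NUL byte.
    return ("SEND\n" + "\n".join(headers) + "\n\n" + body + "\x00").encode()

frame = stomp_send_frame("/queue/CUSTOM", "accounting record")
assert frame.startswith(b"SEND\n")
assert b"persistent:true" in frame
```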


If you restrict the creation of new queues on demand, as explained in the authentication/authorization section, you will need to create the set of allowed destinations at startup, because clients cannot connect to nonexistent destinations. Destinations can be created inside the broker definition as follows [15]:

   <destinations>
       <queue physicalName="CUSTOM" />
   </destinations>

Message forwarding with Camel

In some specialized use cases you may want to have messages relayed from your regional broker to a central broker network. A good example of this is how accounting data could be collected both on a regional and on a central level. Instead of having producers send the same accounting messages to a regional and a central broker, the regional broker can take the responsibility of republishing every received message to the central broker. This can be done in ActiveMQ with the aid of Composite Queues and Camel routes.

Composite Queues

Composite queues in ActiveMQ are virtual destinations where messages can be sent but not read. We can use a composite queue for the producers' destination, and forward messages received internally to two normal queues. One of the normal queues will be the endpoint for regional consumers (REGIONAL), while the other queue will be used to forward messages internally to the central broker (CENTRAL-IN). You can create a composite queue with the following configuration in the broker definition:

           <destinationInterceptors>
               <virtualDestinationInterceptor>
                   <virtualDestinations>
                       <compositeQueue name="COMPOSITE-QUEUE">
                           <forwardTo>
                               <queue physicalName="CENTRAL-IN" />
                               <queue physicalName="REGIONAL" />
                           </forwardTo>
                       </compositeQueue>
                   </virtualDestinations>
               </virtualDestinationInterceptor>
           </destinationInterceptors>

Alternatively, you could try a similar setup without composite queues, using topics (a one-to-many type of relationship between publishers and subscribers). I decided to use queues because of their one-to-one nature, which keeps messages until a single consumer shows up, while topics only deliver messages to the subscribers that are active at that moment.
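The trade-off can be illustrated with a toy Python model (not ActiveMQ code): a queue retains each message until a single consumer takes it, while a topic delivers only to the subscribers active at send time.

```python
# Toy model (not ActiveMQ code) of the queue-vs-topic trade-off: a queue
# keeps each message until a single consumer takes it, while a topic only
# delivers to the subscribers that are active at the moment of sending.

from collections import deque

class Queue:
    def __init__(self):
        self.pending = deque()
    def send(self, msg):
        self.pending.append(msg)          # retained until consumed
    def receive(self):
        return self.pending.popleft() if self.pending else None

class Topic:
    def __init__(self):
        self.subscribers = []
    def subscribe(self, inbox):
        self.subscribers.append(inbox)
    def send(self, msg):
        for inbox in self.subscribers:    # fan out to current subscribers only
            inbox.append(msg)

q = Queue()
q.send("report")                          # no consumer connected yet
assert q.receive() == "report"            # still delivered later

t = Topic()
t.send("lost")                            # no active subscriber: dropped
inbox = []
t.subscribe(inbox)
t.send("seen")
assert inbox == ["seen"]
```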

Camel Routing

ActiveMQ comes packaged with Camel as a component, so all you have to do to activate it is include the proper configuration for your route inside the activemq.xml file [16]. This will look like the following:

<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring" depends-on="broker">
    <route>
        <from uri="regional-broker:queue:CENTRAL-IN"/>
        <to uri="central-broker:queue:CENTRAL-OUT"/>
    </route>
</camelContext>

Note! This configuration should be placed outside the broker bean definition.

The Camel component will act as a consumer for the regional broker on the CENTRAL-IN queue, and as a producer for the central broker on the CENTRAL-OUT destination. This level of indirection is necessary in a setup where the central broker (just like the regional broker) has only an outward-facing stomp+ssl endpoint defined for producers. In this case the regional broker appears to the central broker as just another producer. You should add the depends-on="broker" attribute, where "broker" is the id of the regional broker bean defined previously.
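Conceptually, the route is nothing more than a relay loop: consume each message from CENTRAL-IN and republish it unchanged to CENTRAL-OUT. A toy Python sketch of that behaviour (the real work is done by Camel):

```python
# Toy sketch of the relay behaviour implemented by the Camel route: every
# message consumed from CENTRAL-IN is republished unchanged to CENTRAL-OUT
# on the central broker, where the regional broker looks like a producer.

def relay(central_in, central_out):
    """Move every pending message from one queue to the other."""
    forwarded = 0
    while central_in:
        central_out.append(central_in.pop(0))
        forwarded += 1
    return forwarded

central_in = ["record-1", "record-2"]
central_out = []
assert relay(central_in, central_out) == 2
assert central_out == ["record-1", "record-2"]
assert central_in == []
```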

Next you have to define the camel beans for the 'regional-broker' and the 'central-broker'.

<bean id="regional-broker" class="org.apache.activemq.camel.component.ActiveMQComponent" depends-on="broker">
   <property name="brokerURL" value="vm://regional-broker?create=false&amp;waitForStart=5000"/>
   <property name="userName" value="camel-username"/>
   <property name="password" value="camel-password"/>
</bean>

<bean id="central-broker" class="org.apache.camel.component.stomp.StompComponent">
   <property name="brokerURL" value="ssl://central-broker:central-broker-port"/>
</bean>

In order to create the 'regional-broker' bean you can follow the guide at [17], which outlines the major steps needed to configure ActiveMQ with an internal VM transport connector (vm://regional-broker). The VM transport connector is a JVM-internal connector through which Camel consumes messages from ActiveMQ (CENTRAL-IN). Additionally, if you have the authentication plugin enabled, you also have to define user credentials for the Camel component (camel-username, camel-password). You can do this by following the authentication section.

Creating the 'central-broker' bean turned out to be a little trickier, because the org.apache.activemq.camel.component.ActiveMQComponent class cannot talk over STOMP [18]. You can use org.apache.camel.component.stomp.StompComponent instead, which is not part of the current ActiveMQ distribution, so it needs to be added to the installation manually. You can do this by downloading the latest camel-stomp-2.14.1.jar [19] and stompjms-client-1.19.jar [20] and adding them to /opt/activemq/lib/camel/. Once you have the necessary dependencies you have to define the remote endpoint of the central broker (ssl://central-broker:central-broker-port), which expects message producers on its stomp+ssl endpoint on the destination CENTRAL-OUT.

Some issues with passing the right SSLContext were revealed during this setup, as outlined at [21]. A ticket has been submitted which will solve this in the future [22]. In the meantime, you can work around it by setting the right SSLContext from the command line when creating the JVM, as suggested in the problem description.