Message Processor Coordination Support in WSO2 ESB

INTRODUCTION
In general, a Message Processor is used to achieve guaranteed delivery: messages sent to the JMS queue never get lost even when the back end is down. In case of a backend failure, the message processor keeps retrying to send the message to the endpoint for a specified number of attempts (defaults to 4) and deactivates itself afterwards. It never removes the message from the queue until it is dispatched to the backend successfully. The message processor also ensures in-order delivery of messages to the backend. In this article I am going to explain Message Processor coordination support in WSO2 ESB, where we go a step beyond in order to achieve high availability and scalability in a production setup. Coordination support of the Message Processor comes into play when you run it in cluster mode. Therefore we need a cluster setup with one manager and two worker nodes to move on further. You may refer to my article [1] to create such a cluster setup, which is a necessary prerequisite for following this article. You also need some basic understanding of the Message Processor component in WSO2 ESB standalone mode; for that you may read the article [2].

PREREQUISITES
To follow this article you need to satisfy the following requirements.
  • Have a worker/manager cluster setup which consists of one manager and two worker nodes [1].
  • Have some basic understanding of how the Message Processor works in standalone mode [2].

WORKSHOP
First of all we need to set up a message broker and get it up and running. In this article I will use the Apache ActiveMQ message broker. Alternatively you may use any message broker that you are familiar with, but in that case you may have to deal with the configuration oddities specific to that broker.


I am going to use Apache ActiveMQ 5.5, which is a quite stable release and can be downloaded from [3]. Extract the downloaded archive into some location in your file system, which will be referred to as AMQ_HOME hereafter. Then move into the AMQ_HOME/bin directory and execute the following command to start the message broker.


./activemq console


Once the message broker starts you may access its admin console via the following URL (8161 is the default port of the ActiveMQ web console).


http://localhost:8161/admin
Then we need to copy the ActiveMQ client jar files from the <AMQ_HOME>/lib directory to the <ESB_HOME>/repository/components/lib directory [4]. You need to copy these client jar files to all the ESB nodes in the cluster (one manager and two worker nodes in this instance). Also, before moving forward, let us enable DEBUG logs for synapse. To do so, locate the log4j.properties file which resides inside the <ESB_HOME>/repository/conf directory, find the entry ‘log4j.category.org.apache.synapse’ and change its value to DEBUG. We enable debug logs just for educational and informative purposes; in production systems this is not recommended.
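
For reference, after the change the relevant entry in log4j.properties looks like this:


log4j.category.org.apache.synapse=DEBUG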


Then move into the ESB_WORKER1/samples/axis2Server/src/SimpleStockQuoteService directory and execute the following command to deploy the SimpleStockQuoteService into our axis2Server.


ant


Then move into the ESB_WORKER1/samples/axis2Server directory and start the backend server by executing the following command.


./axis2server.sh


Use Case Scenario 1
The first scenario which we are going to elaborate here is depicted below.



Figure 1: Running Message Processor with task count 1

In this sample,
  1. Client sends a message to the worker 2 node in the cluster
  2. Then the message is taken up by the proxy service deployed on that worker node. The proxy places the message in the JMS message store using the store mediator
  3. The message processor listening to that message store picks up the message
  4. Finally the message processor forwards the message to the backend service

Now let us go ahead and implement this scenario. Start all the nodes in your cluster as mentioned in article [1]. Then log into the manager node’s administrative web console using the following URL (9443 is the default Carbon management HTTPS port; substitute your manager node’s host name and adjust the port for any port offset in your setup).


https://<manager-host>:9443/carbon
The synapse configurations for the endpoint, message store and proxy service are given below. You may directly copy them into your source view and save them via the manager node’s web interface.
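
Since the exact configuration is not reproduced here, the following is a minimal sketch of what these three artefacts could look like. It assumes ActiveMQ runs on localhost:61616 and the sample Axis2 backend on port 9000; the names SimpleStockQuoteServiceEp, Store1 and Queue1 are illustrative, while the proxy is named Proxy2 to match the curl command used later in this article.


<endpoint name="SimpleStockQuoteServiceEp">
    <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
</endpoint>

<messageStore name="Store1" class="org.apache.synapse.message.store.impl.jms.JmsStore">
    <parameter name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
    <parameter name="java.naming.provider.url">tcp://localhost:61616</parameter>
    <parameter name="store.jms.connection.factory">QueueConnectionFactory</parameter>
    <parameter name="store.jms.destination">Queue1</parameter>
</messageStore>

<proxy name="Proxy2" transports="http https" startOnLoad="true">
    <target>
        <inSequence>
            <!-- reply to the client immediately and store the message for the processor -->
            <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
            <property name="OUT_ONLY" value="true"/>
            <store messageStore="Store1"/>
        </inSequence>
    </target>
</proxy>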



Now let us go ahead and create the Message Processor (Scheduled Message Forwarding Processor). The synapse configuration for the Message Processor is given below. You may either copy the configuration directly into the source view of the manager node’s web interface or create one on your own using the design view. Either way is fine.
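
As a minimal sketch, assuming the store and endpoint names used above, the message processor configuration could look like the following; the name Processor1 matches the log entries shown below, and member.count is left at its default value of 1.


<messageProcessor name="Processor1"
                  class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
                  targetEndpoint="SimpleStockQuoteServiceEp"
                  messageStore="Store1">
    <!-- poll the store every second, retry 4 times before deactivating -->
    <parameter name="interval">1000</parameter>
    <parameter name="max.delivery.attempts">4</parameter>
    <parameter name="member.count">1</parameter>
</messageProcessor>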



Also note the value of the member.count parameter, which defaults to 1. It tells how many tasks are running behind this message processor. A given task may run on any worker node of the cluster.


Since we have only one task, it will run on exactly one worker node in our cluster. We have enabled DEBUG logs at synapse level, so you may see the following logs being printed continuously in the worker node where the task runs.

DEBUG - ForwardingService No messages were received for message processor [Processor1]
DEBUG - ForwardingService Exiting the iteration of message processor [Processor1]
DEBUG - ForwardingService Exiting service thread of message processor [Processor1]
DEBUG - ForwardingService No messages were received for message processor [Processor1]
DEBUG - ForwardingService Exiting the iteration of message processor [Processor1]
DEBUG - ForwardingService Exiting service thread of message processor [Processor1]

A sample screenshot of a message processor running in a cluster (1 manager, 2 workers) with task count 1 is given below. You may also note that the message processor task(s) never get executed on the manager node. A worker/manager setup clearly separates the concerns/responsibilities: the manager node is responsible for adding, deploying, editing and deleting ESB artefacts, whereas workers are responsible for serving client requests. That is why message processor tasks do not get executed on the manager node.

Figure 2: Runtime snapshot of Message Processor backed by one task in a cluster setup

Now we’ll send a request to this proxy service and check the end-to-end flow. The sample payload that I am using is given below.
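
The payload below is the standard placeOrder request used with the SimpleStockQuoteService sample; the price, quantity and symbol values are illustrative.


<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ser="http://services.samples"
                  xmlns:xsd="http://services.samples/xsd">
    <soapenv:Header/>
    <soapenv:Body>
        <ser:placeOrder>
            <ser:order>
                <xsd:price>172.23</xsd:price>
                <xsd:quantity>100</xsd:quantity>
                <xsd:symbol>IBM</xsd:symbol>
            </ser:order>
        </ser:placeOrder>
    </soapenv:Body>
</soapenv:Envelope>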





Save it to a file named placeOrder.xml, and move into the directory where the placeOrder.xml file resides in your file system. Then execute the following curl command to send a request to the proxy service.


curl -v -d @placeOrder.xml -H "Content-Type: text/xml; charset=utf-8" http://esb.wso2con.com:8281/services/Proxy2

Note that by merely changing the port number (8281 here) you may point the request at different worker nodes in your cluster. Change the port number according to your setup.
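
For example, assuming the other worker node exposes its HTTP transport on port 8280 (the actual value depends on the port offsets in your cluster), the same request can be sent to it as follows.


curl -v -d @placeOrder.xml -H "Content-Type: text/xml; charset=utf-8" http://esb.wso2con.com:8280/services/Proxy2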

Now you may see that the message is dispatched to the backend. If you check the ActiveMQ admin console you can verify that the message has been delivered. You may also see in the ActiveMQ admin console that there is one JMS consumer on our queue at the moment. Send a few more requests of the same kind and check whether they get delivered to the endpoint successfully. Now our message processor is working fine.


Now shut down the worker node where the task runs. Automatic failover takes place and the task gets scheduled onto the other worker node in the cluster. This yields high availability in your production setup. Send a few more requests to make sure the message processor is still sending messages to the backend. After that verification you may start the worker node which you shut down a little while ago.


Let us try to deactivate the message processor via the manager node’s web interface. Click on the Message Processors link under the main menu, which will list all the message processors. Deactivate the message processor which we created above by clicking on the Deactivate action. Upon deactivation, the task execution is paused, hence the number of JMS consumers on the ActiveMQ side becomes zero. Now you will not see any logs being printed. Reactivate the message processor again and send a few messages as above.




Use Case Scenario 2
In this scenario we are going to run the message processor with two tasks behind it. Click on the Message Processors section under the main menu. Choose the message processor we created above and click on Edit. Locate the Task Count parameter under the Additional Parameters section, change it to 2 and save the message processor. The following diagram depicts this scenario.
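
If you prefer the source view, the Task Count field should correspond to the member.count parameter we saw earlier, so the saved message processor configuration would now carry the following entry while everything else stays the same.


<parameter name="member.count">2</parameter>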


Figure 3: Running Message Processor with task count 2


In this sample,
  1. Client sends a message to the worker 2 node in the cluster
  2. Then the message is taken up by the proxy service deployed on that worker node. The proxy places the message in the JMS message store using the store mediator
  3. One of the message processor tasks listening to that message store picks up the message. In the diagram above, the message processor task running on worker 1 picks the message.
  4. Finally the message processor task forwards the message to the backend service


Now the message processor is backed by two tasks. According to the diagram, one task is running on worker 1 while the other is running on worker 2, but there is no such guarantee; sometimes both tasks may get scheduled onto the same worker node. In that case you may simply edit the message processor and save it, which triggers another round of task scheduling and may place the two tasks on two different worker nodes. Still, there is no guarantee. The bottom line is that a given task will run on some worker node in your cluster; that is the contract here.

A sample screenshot of a message processor running in a cluster (1 manager, 2 workers) with task count 2 is given below. Here you may notice that the two tasks are running on two different worker nodes in our cluster.


Figure 4: Runtime snapshot of Message Processor backed by two tasks in a cluster setup


Now let us send a few messages and see whether they are delivered to the backend successfully. You may use the above curl command to send a few requests. You will notice that those messages are delivered to the endpoint successfully. You may also note that there are two JMS consumers in the ActiveMQ admin console, one per task.


This task count feature increases the scalability of your system in a cluster setup. For instance, having two tasks running on two different worker nodes will increase the throughput of your system. You may add workers and increase the number of tasks behind the message processor dynamically, so that the system scales to handle more client requests.


Now deactivate the message processor as above and you will notice that both task executions are paused. This is where the coordination support comes in handy: when you deactivate the message processor, all the tasks behind it get paused regardless of where in the cluster they are running at the moment.


Reactivate the message processor again and send a few messages using the above curl command to verify its correct behaviour. When you reactivate, all the tasks backing the message processor resume.


CONCLUSION


In this article I explained Message Processor coordination support in detail. I started with an introductory section mentioning the prerequisites necessary to follow this article and the advantages of Message Processor coordination support. Then in the workshop section I walked you through two main use case scenarios related to message processor coordination support. That takes us to the end of another article. The main benefits that the coordination support yields are high availability and scalability.


References

