Architecture
Message processing
Queues are provided by queue microservices, which are native microservices (so each has a standard configuration.yml file) with additional configuration in a queues.yml file, where the queue definitions are provided:
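For illustration only, here is a minimal sketch of how such a queue definition in queues.yml could look. The file layout and the values are assumptions made for this example; the parameter names are the ones described below.

```yaml
# Hypothetical queues.yml sketch -- the real file layout may differ.
# Parameter names are taken from the description below; values are examples only.
SIMPLE:
  innerQueuesAmount: 3                                              # number of inner queues
  waitTimeBetweenCheckingTaskReadyToStartInMillis: 1000             # inner queue rotation period
  threadAmount: 4                                                   # message processing threads
  delaySendProcessTaskToExecuteInMillis: 500                        # minimal delay before processing a message
  maxSendProbeAmount: 3                                             # attempts to deliver a message to the target
  howLongTaskInputShouldBeOnQueueWithoutResultItInMillis: 60000     # how long to wait for a response
  exceptionStringToRepeat: ".*Timeout.*"                            # example regex identifying exceptions worth retrying
  maxAcceptExceptionAmount: 2                                       # retries after a matching exception
  howLongTaskResultShouldBeOnQueueWithoutDownloadItInMillis: 120000 # how long a response waits for the client
```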
Every queue consists of innerQueuesAmount inner queues. At any given moment one inner queue is active for writing, into which the QUEUE entry point (provided by a native microservice) puts asynchronous requests (messages) from clients. The remaining inner queues are open in read mode for message processing (routing and execution on the target microservices by delegators).
After the waitTimeBetweenCheckingTaskReadyToStartInMillis period (in milliseconds) the Queue Manager opens the next inner queue for writing (and closes it for message processing), while the previous one is opened in read mode and its message processing starts. Messages are processed by threadAmount threads, no sooner than after the delaySendProcessTaskToExecuteInMillis period (in milliseconds). On each thread the message handler (the delegator) tries to send a message to the target microservice in at most maxSendProbeAmount attempts; if it does not succeed, the message is discarded.
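The following is a conceptual sketch, not the framework's actual code, of how one delegator thread could apply the delivery parameters described above; the message type and the sendToTarget helper are hypothetical stand-ins.

```java
import java.util.concurrent.ThreadLocalRandom;

/** Conceptual sketch only -- not the framework's actual code. */
public class DelegatorSketch {
    // Configuration parameters from the description above; values are examples.
    static final long DELAY_SEND_PROCESS_TASK_TO_EXECUTE_IN_MILLIS = 500;
    static final int  MAX_SEND_PROBE_AMOUNT = 3;

    /** Tries to deliver one message to the target microservice. */
    static boolean deliver(String message) throws InterruptedException {
        // A message is not processed sooner than delaySendProcessTaskToExecuteInMillis.
        Thread.sleep(DELAY_SEND_PROCESS_TASK_TO_EXECUTE_IN_MILLIS);
        for (int attempt = 1; attempt <= MAX_SEND_PROBE_AMOUNT; attempt++) {
            if (sendToTarget(message)) {   // hypothetical send through the QUEUE entry point
                return true;
            }
        }
        return false;                      // after maxSendProbeAmount failed attempts the message is discarded
    }

    // Stand-in for the real call to the target microservice.
    static boolean sendToTarget(String message) {
        return ThreadLocalRandom.current().nextBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("delivered = " + deliver("example message"));
    }
}
```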
Queues are also capable of receiving and storing responses to messages previously sent to microservices. A queue waits howLongTaskInputShouldBeOnQueueWithoutResultItInMillis milliseconds for the response; if this time is exceeded, the message is discarded. If the result comes with an exception (identified through a regular expression by the exceptionStringToRepeat string), the associated request is repeated up to maxAcceptExceptionAmount times. After receiving the answer, the queue waits howLongTaskResultShouldBeOnQueueWithoutDownloadItInMillis milliseconds for the client to retrieve it from the queue.
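A similar conceptual sketch of the decision whether to repeat a request after an exceptional result is shown below; it assumes full-string regex matching, and the class and method names are hypothetical.

```java
import java.util.regex.Pattern;

/** Conceptual sketch of the exception-retry rule described above -- not the framework's actual code. */
public class ResponseHandlingSketch {
    // Configuration parameters from the description above; values are examples.
    static final String EXCEPTION_STRING_TO_REPEAT = ".*Timeout.*";
    static final int    MAX_ACCEPT_EXCEPTION_AMOUNT = 2;

    /** Decides whether a request should be repeated after a result carrying an exception. */
    static boolean shouldRepeat(String exceptionMessage, int exceptionsSoFar) {
        // Assumption: the exception text is matched against exceptionStringToRepeat as a whole.
        boolean matches = Pattern.matches(EXCEPTION_STRING_TO_REPEAT, exceptionMessage);
        return matches && exceptionsSoFar < MAX_ACCEPT_EXCEPTION_AMOUNT;
    }

    public static void main(String[] args) {
        System.out.println(shouldRepeat("java.util.concurrent.TimeoutException: no reply", 1)); // true
        System.out.println(shouldRepeat("IllegalStateException: bad input", 0));                // false
    }
}
```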
Reactive data flows
Let's assume that the web application WebApp1 makes an asynchronous request to the application microservice app_A using the SIMPLE queue located on the queues_1 queue microservice. It uses the reactive communication schema, as shown on the following diagram:
The whole process has the following stages:
1. The web application WebApp1 sends an asynchronous request to a service located on app_A. It uses the SIMPLE queue on queues_1 and registers a callback function for the response (a client-side sketch is shown after this list). The communication is done through QUEUE entry points on the Main Server.
2. WebApp1 receives confirmation from queues_1 that the request has been accepted for processing by the SIMPLE queue (through the Main Server), after which the thread associated with the request is released. WebApp1 threads are not engaged in the response handling until the response is ready on a queue. The response handling by WebApp1 is described in points 7 and 8.
3. The SIMPLE queue handles the request and, through the delegator, tries repeatsAmountByDelegator times to find an available instance of the app_A microservice that should process the request. If it does not succeed, the queue delegator waits the timeToWaitBetweenRepeatProbeInMillisByDelegator period of time and tries again. If it succeeds, the delegator sends the request to the target microservice using the QUEUE entry point on the Main Server.
4. The app_A microservice sends confirmation to the queues_1 microservice that the request has been accepted for processing (the communication between a queue and a target application is also asynchronous).
5. After the request is processed, the delegator located on the QUEUE entry point of the target application microservice tries repeatsAmount times to find an available instance of the SIMPLE queue on the queues_1 microservice and send the response. If it does not succeed, the delegator waits the timeToWaitBetweenRepeatProbeInMillis period of time and tries again. If it succeeds, the delegator sends the response to the queue using the QUEUE entry point on the Main Server. These parameters are defined in the target application microservice configuration file; see this chapter to get more information.
6. The queues_1 microservice confirms successful reception of the response.
7. If a callback function has been registered during sending the request by the WebApp1 microservice, the process of sending the response back to the client is performed. It engages a new thread from the thread pool, which executes the callback function. The response waits the howLongTaskResultShouldBeOnQueueWithoutDownloadingItInMillis period of time to be sent to the client; after this time, the response is discarded by the garbage collection process.
8. The confirmation of receiving the response is sent by the client (WebApp1) to the queues_1 microservice.
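To make the client side of this flow concrete, here is a purely hypothetical sketch of how WebApp1 could send such a request and register a callback. The AsyncQueueClient interface and all method names are invented for illustration and are not the framework's actual API.

```java
import java.util.function.Consumer;

/** Illustrative sketch of the client side of the flow above -- the API shown here is invented. */
public class WebApp1Sketch {

    /** Hypothetical client for the SIMPLE queue on the queues_1 microservice. */
    interface AsyncQueueClient {
        // Sends an asynchronous request to a target microservice and registers a callback
        // that is executed on a thread from the pool once the response reaches the queue.
        void sendAsync(String targetMicroservice, String payload, Consumer<String> callback);
    }

    static void makeRequest(AsyncQueueClient simpleQueueOnQueues1) {
        // Point 1: the calling thread only waits for the queue's confirmation of acceptance
        // (point 2), then it is released; it is not blocked until the response is ready.
        simpleQueueOnQueues1.sendAsync("app_A", "{\"action\":\"doSomething\"}", response -> {
            // Point 7: executed on a new thread from the pool when the response arrives.
            // If the client does not pick the response up within
            // howLongTaskResultShouldBeOnQueueWithoutDownloadingItInMillis, it is discarded.
            System.out.println("app_A answered: " + response);
        });
    }

    public static void main(String[] args) {
        // Dummy client that "responds" immediately, just to exercise the callback path.
        makeRequest((target, payload, callback) -> callback.accept("ok from " + target));
    }
}
```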
Of course, this reactive flow works fine in the case where there are many nodes with different sets of components, i.e. in a distributed architecture, for example where web applications, queues and applications are located on different sets of nodes (at least two), as shown on the picture below:




