Messaging patterns describe the solutions and infrastructure needed to connect components and services in the cloud, guaranteeing scalability, correct processing, failure handling, and synchronization in distributed systems.
Asynchronous messaging is the most widely used approach because of its many benefits, but it also introduces many difficulties.
The patterns used are as follows:
Asynchronous request-reply patterns:
This pattern is used when we need to carry out long-running processes, or processes that cannot give an immediate response to the requester: for example, when there are infrastructure or firewall limitations, or when the backend must do lengthy work. The backend responds with a 202 (Accepted), returns a status endpoint for the process, and places the request into a kind of pool for processing. The requester polls the status endpoint, or is notified when processing completes. This avoids blocking, request overload, and even timeouts: the backend releases the caller quickly and responds or notifies when the work is finished.
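The 202-plus-status-endpoint flow above can be sketched as follows. This is a minimal in-memory sketch, not a real HTTP backend; the names `submit_request`, `process_pending`, and `poll_status` are hypothetical, and the job store stands in for the "pool" of pending work.

```python
import uuid

# In-memory job store standing in for the backend's processing pool.
jobs = {}

def submit_request(payload):
    """Accept the request immediately: return 202 plus a status endpoint."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending", "result": None, "payload": payload}
    return {"code": 202, "status_url": f"/status/{job_id}"}

def process_pending():
    """Background worker: completes every queued job."""
    for job in jobs.values():
        if job["status"] == "pending":
            job["result"] = job["payload"].upper()  # stand-in for the long work
            job["status"] = "done"

def poll_status(status_url):
    """Requester polls: 202 while pending, 200 with the result when done."""
    job = jobs[status_url.rsplit("/", 1)[1]]
    if job["status"] == "done":
        return {"code": 200, "result": job["result"]}
    return {"code": 202, "result": None}
```

The requester is released immediately with the 202; only the later poll (or a push notification, in a fuller design) delivers the result.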
Claim check patterns:
This pattern is used when we need to send very large amounts of information, such as files. Messaging systems are designed for small messages, to avoid congestion and use few resources; sending large messages would cause many conflicts and even significant cost. The solution is to keep the payload in external storage. For example, suppose we need to upload a file and process it to generate an output. We save the file somewhere (external storage) and send the messaging system a message with all the metadata plus a reference, the claim check, indicating where the file is stored. When the process finishes, or when a consumer needs the file, the message carries the information required to retrieve it and do the processing.
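A minimal sketch of the claim-check flow, assuming an in-memory `blob_store` dict standing in for external storage and a `message_bus` list standing in for the messaging system (both names are hypothetical):

```python
import uuid

blob_store = {}   # stands in for external storage (e.g. a blob service)
message_bus = []  # stands in for the messaging system

def send_with_claim_check(metadata, large_payload):
    """Store the heavy payload externally; the message carries only a claim ticket."""
    claim_id = str(uuid.uuid4())
    blob_store[claim_id] = large_payload
    message_bus.append({"meta": metadata, "claim_id": claim_id})

def consume():
    """The consumer redeems the claim ticket to fetch the payload when needed."""
    msg = message_bus.pop(0)
    payload = blob_store[msg["claim_id"]]
    return msg["meta"], payload
```

The message itself stays small; only the consumer that actually needs the file pays the cost of retrieving it.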
Choreography patterns:
This pattern removes responsibility from each point in a flow of operations (usually in microservices): the services do not need to know which flow they belong to or what the whole operation is; each performs its function and completes its part. In the orchestration variant, an orchestrator controls the whole flow and the whole operation. This reduces the coupling between services, but increases the coupling between each service and the orchestrator, and an orchestrator is a difficult structure to design and maintain. The alternative is to organize the flow as a choreography with asynchronous messages: messages are added to a queue, each service subscribes to the message types it can respond to, and after processing successfully it publishes the next message back to the queue and the flow continues; if it fails, a failure message is sent or the circuit is cut. Used this way, we avoid both the coupling and the bottlenecks.
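The choreography variant can be sketched with a shared queue and per-message-type subscriptions. The order flow (`order_placed` → `payment_taken` → `order_shipped`) is a hypothetical example; each step knows only its own message type and what to publish next.

```python
from collections import deque

message_queue = deque()
handlers = {}  # message type -> the one service that handles it (sketch)
trace = []

def subscribe(msg_type, handler):
    handlers[msg_type] = handler

def publish(msg_type, data):
    message_queue.append((msg_type, data))

def run():
    """Drain the queue: each service reacts only to the message types it
    subscribed to, and publishes the next message on success."""
    while message_queue:
        msg_type, data = message_queue.popleft()
        if msg_type in handlers:
            trace.append(msg_type)
            handlers[msg_type](data)
    return trace

# Hypothetical order flow, one service per step; no orchestrator involved.
subscribe("order_placed", lambda d: publish("payment_taken", d))
subscribe("payment_taken", lambda d: publish("order_shipped", d))
subscribe("order_shipped", lambda d: None)
```

No service here references any other service directly; the only shared contract is the message types on the queue.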
Competing consumer patterns:
This pattern is used to improve scalability, response times, availability, and load distribution. Suppose that at peak hours we have many thousands of requests and need to handle them quickly; a single processing point would obviously collapse. With this pattern we create multiple consumer instances that all pull from the same message queue: each incoming request becomes a message, and whichever consumer is free takes the next one, so the load is spread automatically and the single entry point is released. As each instance finishes processing, the response is returned to the corresponding requester. This pattern is normally used for large volumes of simultaneous requests that can be processed in parallel.
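A minimal sketch of competing consumers using threads against one shared queue; the function name and the doubling "work" are hypothetical stand-ins for real processing:

```python
import queue
import threading

def competing_consumers(tasks, n_workers=4):
    """Several identical consumers pull from one shared queue; whichever
    worker is free takes the next message, spreading the load automatically."""
    work = queue.Queue()
    for t in tasks:
        work.put(t)

    results = []
    lock = threading.Lock()

    def consumer():
        while True:
            try:
                item = work.get_nowait()
            except queue.Empty:
                return  # queue drained: this consumer retires
            processed = item * 2  # stand-in for the real request handling
            with lock:
                results.append(processed)

    workers = [threading.Thread(target=consumer) for _ in range(n_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results
```

Note that the completion order is nondeterministic, which is exactly the point: no consumer is special, and adding workers scales throughput without touching the entry point.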
Pipes and filters patterns:
This pattern is used to decouple functionality that can be broken down into separate modules. Suppose we have two modules, one performing 20 tasks and the other 30; their final results are different processes, but internally some of their tasks are the same, say, sending an email and writing to a log. If we separate the modules and break them down by task, each isolated task becomes a filter that does one specific thing, and the filters are composed into a flow, the pipe. Then, when we want to run the first module's work, we create messages so that each stage of the flow processes its particular task. This reduces the processing time of the whole module, avoids bottlenecks, separates functionality that can be quickly scaled and maintained, and avoids monolithic processes and duplicated tasks. It also allows processes, filters, and pipes to be deployed on separate infrastructure.
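A minimal sketch of composing single-purpose filters into a pipe; the filter names (`strip_filter`, `lower_filter`, `log_filter`) are hypothetical examples of tasks shared by two otherwise different flows:

```python
def make_pipeline(*filters):
    """Compose small single-purpose filters into one pipe; each filter can
    be scaled, replaced, or maintained independently."""
    def pipe(message):
        for f in filters:
            message = f(message)
        return message
    return pipe

audit_log = []

def strip_filter(text):
    return text.strip()

def lower_filter(text):
    return text.lower()

def log_filter(text):
    audit_log.append(text)  # the shared "keep in a log" task from the example
    return text

# Two different pipelines can reuse the same filters without duplication.
normalize = make_pipeline(strip_filter, lower_filter, log_filter)
```

In a real deployment each filter would be its own service consuming from one queue and publishing to the next; function composition is the in-process analogue.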
Priority queue patterns:
This pattern is used to prioritize the processing of requests, not by order of arrival or time, but by configured priority. For example, when a backend must give priority to monetary-transaction requests over balance queries, a message queue is created that, over a time window, checks and orders the requests by priority; in some cases it is even used to push requests that do not need an immediate response into the background.
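A minimal sketch of a priority queue using the standard-library heap; the convention that a lower number means higher priority is an assumption of this sketch, and a counter preserves arrival order among equal priorities:

```python
import heapq
import itertools

class PriorityQueue:
    """Messages are dequeued by priority, not arrival time; a counter keeps
    FIFO order among messages with equal priority."""
    def __init__(self):
        self._heap = []
        self._count = itertools.count()

    def put(self, priority, message):
        # Lower number = higher priority (assumption for this sketch).
        heapq.heappush(self._heap, (priority, next(self._count), message))

    def get(self):
        return heapq.heappop(self._heap)[2]
```

A payment enqueued after several balance queries still comes out first, matching the example above.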
Publisher-subscriber patterns:
This pattern is used when we only need to notify the subscribers of an event or message and have them process it. For example, suppose that after a file uploaded from a website is saved to the server, it needs to be processed. The processor subscribes to the "file uploaded successfully" event, and when it receives a notification it performs its tasks. It is ideal when we have distributed processes and need to notify the parties involved when one process ends so that another can continue. This achieves effective decoupling, processing independence, parallelism, and asynchrony. The messages are normally placed in a queue or on a bus that notifies the corresponding subscribers.
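A minimal in-process sketch of the bus that notifies subscribers; the `EventBus` class and the `file_uploaded` topic are hypothetical names for the file-upload example above:

```python
from collections import defaultdict

class EventBus:
    """Publishers emit events without knowing who listens; the bus notifies
    every handler registered for that topic."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)
```

The uploader only publishes; it neither knows nor cares that a processor (or several) is subscribed, which is what makes the decoupling effective.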
Queue-based load leveling patterns:
This pattern uses a message queue as a buffer. When a service receives so many requests that the load can cause failures or intermittency, a message queue is placed in front of it and messages are consumed according to the service's load: each message is processed once the service guarantees the availability and capacity to handle it, evening out the load and the volume of requests.
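A minimal sketch of the buffer: producers enqueue at any rate, while the service drains only what it can sustain per cycle. The `max_per_tick` capacity figure and the `LoadLeveler` name are assumptions of this sketch.

```python
from collections import deque

class LoadLeveler:
    """The queue absorbs bursts; the service drains messages only at the
    rate it can sustain."""
    def __init__(self, max_per_tick):
        self.queue = deque()
        self.max_per_tick = max_per_tick  # assumed service capacity per cycle

    def enqueue(self, msg):
        # Producers are never blocked, whatever the service's current load.
        self.queue.append(msg)

    def tick(self):
        """One service cycle: process at most max_per_tick messages."""
        done = []
        for _ in range(min(self.max_per_tick, len(self.queue))):
            done.append(self.queue.popleft())
        return done
```

A burst of five messages against a capacity of two is simply smoothed across three cycles instead of overwhelming the service.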
Scheduler agent supervisor patterns:
This pattern is used to control and synchronize a flow, handling faults so as not to break the flow or the complete set of operations. If something fails, it knows what to undo, reprocess, or retry; if everything succeeds, it knows how to complete the flow. It is normally used when there are remote processes, legacy systems, or other external systems. It usually consists of a scheduler that organizes the tasks and plans how they should be executed (via a message queue), a supervisor responsible for monitoring the entire process, and an agent responsible for each particular task.
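A heavily simplified sketch of the three roles, assuming agents are plain callables that raise on failure; the class name, the retry policy, and the task names in the usage are all hypothetical:

```python
class SchedulerAgentSupervisor:
    """Scheduler plans the tasks, agents execute them, and the supervisor
    checks outcomes and retries failed steps (compensation is omitted)."""
    def __init__(self, agents, max_retries=2):
        self.agents = agents          # task name -> callable agent
        self.max_retries = max_retries
        self.state = {}               # task name -> "pending"/"done"/"failed"

    def run(self, task_names):
        # Scheduler: record the planned tasks.
        for name in task_names:
            self.state[name] = "pending"
        # Supervisor: drive each agent, retrying on failure.
        for name in task_names:
            for _attempt in range(self.max_retries + 1):
                try:
                    self.agents[name]()   # agent does its one task
                    self.state[name] = "done"
                    break
                except Exception:
                    self.state[name] = "failed"  # retried until retries run out
        return self.state
```

A real implementation would also record durable state so a crashed supervisor can resume, and would run compensating actions for steps it cannot complete.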
Sequential convoy patterns:
This pattern is used to process related sets of messages in order without blocking other groups. For example, when some other scaling strategy or pattern is in use, it is sometimes necessary to process a set of messages in a specific order, so messages are grouped or categorized in order to process one set before another without breaking the other strategies or patterns. A typical case: messages are normally processed by load, but a creation message must be processed before an update message for the same entity, even if the update is the heavier of the two.
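A minimal sketch of grouping messages into convoys by a category key; the key names (`order-1`, `order-2`) and function names are hypothetical. Within a convoy the arrival order is preserved, while different convoys remain free to be processed independently (e.g. by competing consumers).

```python
from collections import defaultdict, deque

def group_into_convoys(messages):
    """Messages sharing a category key stay in arrival order within their
    convoy; different convoys can be handled independently."""
    convoys = defaultdict(deque)
    for key, payload in messages:
        convoys[key].append(payload)
    return convoys

def drain(convoy):
    """Process one convoy strictly in order, e.g. 'create' before 'update'."""
    return [convoy.popleft() for _ in range(len(convoy))]
```

This keeps the create-before-update guarantee per entity without forcing a single global order on the whole queue.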