AI News: Lightbend to Showcase Best Streaming Data Practices for Machine ...
- On 13 January 2020
Reactive in practice, Unit 12: Conclusion and summary
When first launched, the new Walmart Canada platform needed to be responsive on a multitude of levels. While seven years old now (dog years in technology), the refreshed Walmart Canada platform, as it originally launched with Scala, Play, and Akka, remains a case study worth examining in the reactive community.
Tools such as Lagom are built with these core principles in mind, which is how the latest breed of reactive frameworks reduces the overhead of building out your own reactive plumbing.
A Reactive System must adapt to a changing business environment, operational environment, and development landscape, all within the constraint of working with existing systems that need to be evolved rather than replaced.
Our choices are very intentional, which is necessary for commercial projects that will require a certain level of support, security, indemnification, commercial terms, and so forth.
This final unit summarizes the architecture we have demonstrated over the course of the entire series and outlines some next steps to consider for those about to embark on a real-world reactive project.
While internalizing the lessons in this series, take comfort in the fact that many of these techniques are proven in systems that handle massive amounts of business every single day.
In Unit 1, we outlined a technique for designing event-driven systems called event storming, along with the proven modeling technique of DDD (domain-driven design).
Event modeling is a lightweight design technique that adds additional structure to an event storming exercise, specifically by creating swimlanes for each bounded context, with personas and external systems in separate interaction swimlanes.
One of the most valuable side effects of event storming and DDD is that the models produced are comprehensible by a diverse set of stakeholders, including UX and UI team members.
In Unit 3, we laid out the basics of how commands and services interact with each other, along with patterns such as algebraic data types (ADTs), which help ensure that the Java compiler handles as many edge cases as possible when processing trade requests, without having to resort to extensive testing.
Rather than use if-else-then statements to handle various types of trades (market, limit, stop-limit), we used the visitor pattern along with ADTs to guarantee that all cases are handled, and a compiler error will be thrown if a type of trade is added without the appropriate handling logic.
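As a hedged sketch of this idea, the visitor pattern turns an unhandled trade type into a compile error. The type names below (`Trade`, `MarketOrder`, `TradeVisitor`, and so on) are illustrative, not the actual Reactive Stock Trader classes:

```java
// Illustrative ADT-plus-visitor sketch; names are invented for this example.
interface TradeVisitor<T> {
    T visit(MarketOrder order);
    T visit(LimitOrder order);
    T visit(StopLimitOrder order);
}

interface Trade {
    <T> T accept(TradeVisitor<T> visitor);
}

final class MarketOrder implements Trade {
    public <T> T accept(TradeVisitor<T> v) { return v.visit(this); }
}

final class LimitOrder implements Trade {
    final double limitPrice;
    LimitOrder(double limitPrice) { this.limitPrice = limitPrice; }
    public <T> T accept(TradeVisitor<T> v) { return v.visit(this); }
}

final class StopLimitOrder implements Trade {
    final double stopPrice, limitPrice;
    StopLimitOrder(double stop, double limit) { this.stopPrice = stop; this.limitPrice = limit; }
    public <T> T accept(TradeVisitor<T> v) { return v.visit(this); }
}

public class TradeDescriber {
    // Every visitor must implement one visit() overload per trade type, so
    // no if-else chain can silently miss a case.
    static String describe(Trade trade) {
        return trade.accept(new TradeVisitor<String>() {
            public String visit(MarketOrder o) { return "market"; }
            public String visit(LimitOrder o) { return "limit @ " + o.limitPrice; }
            public String visit(StopLimitOrder o) { return "stop-limit " + o.stopPrice + "/" + o.limitPrice; }
        });
    }

    public static void main(String[] args) {
        System.out.println(describe(new MarketOrder()));      // market
        System.out.println(describe(new LimitOrder(10.5)));   // limit @ 10.5
    }
}
```

If a new trade type (say, a trailing-stop order) is added with a matching `visit()` overload on `TradeVisitor`, every existing visitor implementation fails to compile until it handles the new case, which is exactly the guarantee described above.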
We outlined the effective application of event sourcing to eliminate the need for mutable data structures, which helps to ensure that entities are fully persistent and can be recovered in the event of a crash or restart.
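A minimal illustration of that recovery property follows; the class names (`Portfolio`, `SharesBought`, `SharesSold`) are invented for this sketch, not the actual series code. State changes only by applying immutable events, so replaying the persisted journal reconstructs the entity exactly:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical event-sourcing sketch: the journal of events is the source
// of truth, and current state is rebuilt by replaying it after a restart.
interface PortfolioEvent {}

final class SharesBought implements PortfolioEvent {
    final String symbol; final int count;
    SharesBought(String symbol, int count) { this.symbol = symbol; this.count = count; }
}

final class SharesSold implements PortfolioEvent {
    final String symbol; final int count;
    SharesSold(String symbol, int count) { this.symbol = symbol; this.count = count; }
}

public class Portfolio {
    private final Map<String, Integer> holdings = new HashMap<>();

    // Applying an event is the ONLY way state changes.
    void apply(PortfolioEvent event) {
        if (event instanceof SharesBought b) {
            holdings.merge(b.symbol, b.count, Integer::sum);
        } else if (event instanceof SharesSold s) {
            holdings.merge(s.symbol, -s.count, Integer::sum);
        }
    }

    // Recovery after a crash or restart: replay the full journal.
    static Portfolio recover(List<PortfolioEvent> journal) {
        Portfolio p = new Portfolio();
        journal.forEach(p::apply);
        return p;
    }

    int sharesOf(String symbol) { return holdings.getOrDefault(symbol, 0); }

    public static void main(String[] args) {
        List<PortfolioEvent> journal = List.of(
            new SharesBought("IBM", 100),
            new SharesSold("IBM", 30));
        System.out.println(Portfolio.recover(journal).sharesOf("IBM")); // 70
    }
}
```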
In Unit 6, we covered the relationship between commands and queries, and, more specifically, how to separate the command path from the query path in order to effectively optimize our system.
A common cause of latency in modern systems is the improper handling of complex queries in real time on the same threads as those serving requests and handling commands.
By separating reads and writes, including the full separation of read models from write models, we can individually optimize reads and writes.
Once we fully separate the command channel from the query channel, we can begin to significantly optimize the performance, reliability, and latency of queries (Unit 7).
Separating commands from queries can radically reduce query latency by orders of magnitude as we move processing off of the main UI threads and into asynchronous worker threads.
Through the use of Lagom read-side processors along with Cassandra (Unit 8), we can support the handling of long-running, multi-stage transactions without resorting to brittle techniques such as distributed two-phase commits or ThreadLocal techniques and sticky sessions.
Using the Lagom PubSub API (Unit 10), we can subscribe to raw events or changes to read-side models from within a Lagom service, and then, using Reactive Streams, push the data to any interested external party (such as our Vue.js UI).
In summary, the architecture of Reactive Stock Trader delivers the benefits of enhanced runtime performance through optimization techniques such as CQRS, increased resilience through cluster-aware microservices, and the productivity benefits of working with a single logical system.
After a command is validated, we have two mechanisms, which are not mutually exclusive, for emitting status back to the UI or to other interested systems. The following diagram captures this flow as a pattern that applies to almost every command in Reactive Stock Trader, and to almost any command that will be added in the future.
Using the data pump pattern implemented by a collection of ReadSideProcessor instances, you can set up any number of read-side processors that will work in the background to monitor changes in the journal and act appropriately to keep views up to date.
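The data pump idea can be sketched in plain Java. Note that this is a simulation of the pattern only, not the Lagom `ReadSideProcessor` API: a background processor tails the journal from its last offset and folds new events into a read-side view, so queries never touch the write path.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Toy data pump: the write side appends to the journal; the pump consumes
// from its saved offset and keeps a denormalized read-side view up to date.
public class DataPump {
    static final List<String> journal = new CopyOnWriteArrayList<>();        // write side
    static final Map<String, Integer> readView = new ConcurrentHashMap<>();  // read side
    static int offset = 0;                                                   // pump position in the journal

    // One pass of the pump: consume any events appended since the last pass.
    static void pumpOnce() {
        while (offset < journal.size()) {
            String event = journal.get(offset++);
            readView.merge(event, 1, Integer::sum);  // e.g. count occurrences per event type
        }
    }

    public static void main(String[] args) {
        journal.add("OrderPlaced");
        journal.add("OrderPlaced");
        journal.add("OrderFulfilled");
        pumpOnce();  // in production this runs continuously in the background
        System.out.println(readView.get("OrderPlaced")); // 2
    }
}
```

Because the pump stores its offset, it can resume after a restart without reprocessing the whole journal, which is the same property Lagom's read-side processors provide by persisting their offsets.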
In order to keep our system as performant as possible, Lagom stores every active entity in memory and looks it up whenever necessary without having to perform an expensive SQL query or wait for IO latency.
If an entity becomes infrequently accessed, it is passivated by Lagom, which means that it is removed from memory but is available upon request by restoring the entity’s current state from the event log.
This brings the performance benefits of in-memory storage together with the cost savings of storing entities in cheaper forms of storage such as SSDs and spinning disks.
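The passivation mechanics can be sketched in plain Java; this is an illustration of the idea only, not Lagom's actual implementation, and all names here are invented. An active entity lives in memory; passivating it drops it from memory, and the next access restores it by replaying its persisted events:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy entity cache: journals stand in for durable event storage (e.g. on
// cheaper disks), while 'active' holds only the entities kept in memory.
public class EntityCache {
    private final Map<String, List<Integer>> journals = new HashMap<>();  // durable per-entity event log
    private final Map<String, Integer> active = new HashMap<>();          // in-memory entities

    // Persist an event and keep the in-memory state in sync.
    void persist(String id, int delta) {
        int current = lookup(id);  // activates the entity if it was passivated
        journals.computeIfAbsent(id, k -> new ArrayList<>()).add(delta);
        active.put(id, current + delta);
    }

    // Passivation: drop from memory; the journal remains the source of truth.
    void passivate(String id) { active.remove(id); }

    // Lookup: in-memory hit, or recover state by replaying the event log.
    int lookup(String id) {
        return active.computeIfAbsent(id,
            k -> journals.getOrDefault(k, List.of()).stream().mapToInt(Integer::intValue).sum());
    }
}
```

Frequently accessed entities stay in memory and answer lookups immediately; rarely used ones cost one replay on their next access, which is the trade-off described above.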
Developers who plan to work with Lagom in a production environment should become familiar with the concepts of cluster sharding and understand how Lagom manages entities under the hood in order to successfully support a production system.
The Reactive Stock Trader architecture streams events by connecting Lagom services to Play controllers using Reactive Streams, and then seamlessly connects Play to Vue.js over a WebSocket connection, using built-in convenience methods within Play to treat a WebSocket connection as a sink for streams.
Each bounded context is expected to publish interesting events by default, giving other bounded contexts the opportunity to subscribe to domain events.
By intermediating direct service-to-service connections using Kafka via pub-sub, we greatly simplify future maintenance and also create a durable record of events that influence the behavior of each microservice.
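A minimal in-memory stand-in for this broker-mediated pattern is shown below. It is a toy illustration only: services publish domain events to a named topic instead of calling each other directly, the broker keeps a durable record per topic, and events fan out to every subscriber. Kafka adds the real durability, partitioning, and consumer offsets that this sketch omits.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy pub-sub broker: a durable log per topic plus live fan-out to subscribers.
public class Broker {
    private final Map<String, List<String>> log = new HashMap<>();
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    public void publish(String topic, String event) {
        log.computeIfAbsent(topic, t -> new ArrayList<>()).add(event);  // durable record
        subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(event));
    }

    // Later consumers (or rebuilt read models) can re-read the full history.
    public List<String> replay(String topic) {
        return log.getOrDefault(topic, List.of());
    }
}
```

The `replay` method captures why the durable record matters: a new bounded context added months later can consume the complete event history rather than starting from a blank slate.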
In the example above, within a system networking boundary, each microservice (bounded context) exposes both a REST endpoint and a number of Kafka topics.
Authentication, authorization, and other core policies should be a huge focus of work in the BFF tier, along with protocol translations and any other plumbing code that will help to keep microservices focused on pure business logic.
Now that we have a general idea of our architecture, along with a complete refresher of the contents of the entire series, let’s take some time to cover additional considerations before deploying a reactive system to a real production environment.
As we covered in Unit 11, Minikube is a great way to gain an understanding of all of the steps involved in a real deployment, but there are many other deployment topics to cover, too many for this series.
“Istio is open technology that provides a way for developers to seamlessly connect, manage, and secure networks of different microservices — regardless of platform, source, or vendor.”
It also improves the quality of service within a microservice-based system by making it easier to properly configure security, encryption, authorization, and rate limiting options.
For those who plan to run Cassandra on cloud hardware, it’s important to use the largest instances with local storage wherever possible (within cost constraints).
A deep dive into serialization is out of scope for this series, but we would like to put it on your radar as an important topic that needs to be explored well before a production deployment, and absolutely before your microservices system grows beyond a few simple bounded contexts.
Our short recommendation is this: there is no single right answer to serialization, so you'll need to explore the various options at your disposal and then work within the bounds of Lagom to configure your serialization strategy appropriately.
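One practice that applies regardless of the serializer you choose is tagging every persisted event with an explicit schema version, so that old events in the journal remain readable after the event class evolves. The wire format below (`version|type|payload` strings) and the `currency` field are invented purely for illustration; in Lagom you would express the same idea through its serialization configuration rather than hand-rolled strings.

```java
// Hedged sketch of schema versioning for persisted events: old events are
// "upcast" to the latest shape at read time, so the rest of the system only
// ever sees the current schema.
public class VersionedEnvelope {
    static String encode(int version, String type, String payload) {
        return version + "|" + type + "|" + payload;
    }

    static String decodePayload(String wire) {
        String[] parts = wire.split("\\|", 3);
        int version = Integer.parseInt(parts[0]);
        String payload = parts[2];
        // Illustrative upcast: v1 events predate the (hypothetical) currency
        // field, so supply the historical default when reading them.
        if (version == 1) {
            payload = payload + ";currency=CAD";
        }
        return payload;
    }
}
```

The key property is that the journal is never rewritten; only the read path knows how to lift each historical version into the current one.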
Understanding the health of your Kubernetes pods, the state of the connections between them, congestion on inter-service network connections, and so forth are all important indicators of application health in the context of the deployed infrastructure.
Kevin brings almost two decades of programming and architecture experience in Java and the JVM to projects, with a specialization in training and development in areas such as event-driven architecture, microservices, cloud-enablement, and machine learning enablement.