documentation/modules/ROOT/pages/02-architecture.adoc
@@ -88,7 +88,8 @@ The event-driven flow enables eventual consistent collaboration and state synchr
A usual flow may look like:

1. An end-user application sends an _HTTP_ request to the _Event Mesh_.
Such a message can be understood as a _Command_ type event.
2. The _Event Mesh_ (Broker) persists the event in a queue (like an Apache Kafka topic, but the implementation is hidden from the user).
After the _Event Mesh_ safely persists the data, it returns a successful _HTTP_ response with the `202 Accepted` return code.
At this point, the operation can already be considered successful from the end-user's point of view.
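The command-style request in step 1 can be sketched with the JDK's built-in HTTP client types. This is an illustrative sketch, not code from the demo application: the Broker URL and the event type are hypothetical placeholders, while the `ce-*` headers follow the CloudEvents HTTP binary-mode binding.

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.UUID;

public class SendCommand {

    // Builds the HTTP request for step 1. The `ce-*` headers carry the
    // CloudEvents attributes in binary content mode; the event body travels
    // as plain JSON.
    static HttpRequest commandRequest(URI broker, String json) {
        return HttpRequest.newBuilder(broker)
                .header("content-type", "application/json")
                .header("ce-specversion", "1.0")
                .header("ce-id", UUID.randomUUID().toString())
                .header("ce-type", "com.example.cabs.CalculateDriverFee") // hypothetical type
                .header("ce-source", "/end-user-app") // hypothetical source
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = commandRequest(
                URI.create("http://broker-ingress.example/default/default"),
                "{\"driver_id\": 42}");
        // Once the Event Mesh has persisted the event, it would answer
        // this request with `202 Accepted`.
        System.out.println(req.method() + " " + req.uri());
    }
}
```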
@@ -155,7 +156,7 @@ The _Event Mesh_ pushes the events as _CloudEvents_ encoded as _REST_ messages.
The exponential backoff algorithm used by _Event Mesh_ is configurable.
It uses the following formula to calculate the backoff period: `+backoffDelay * 2^<numberOfRetries>+`, where the backoff delay is a base number of seconds, and the number of retries is automatically tracked by the _Event Mesh_.
A dead letter sink can also be configured to send events in case they exceed the maximum retry number, which is also configurable.
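As a quick sanity check of the formula, the sketch below (not part of the demo code) sums the per-retry delays for a hypothetical base delay of 0.4 seconds. With 10 retries that totals about 409 seconds, which matches the roughly 6 min 50 sec figure quoted later for the example configuration.

```java
public class BackoffSchedule {

    // backoffDelay * 2^numberOfRetries, per the formula above.
    static double backoffSeconds(double backoffDelay, int numberOfRetries) {
        return backoffDelay * Math.pow(2, numberOfRetries);
    }

    public static void main(String[] args) {
        double delay = 0.4; // hypothetical base delay, in seconds
        double total = 0;
        for (int retry = 0; retry < 10; retry++) {
            total += backoffSeconds(delay, retry);
        }
        // Geometric series: 0.4 * (2^10 - 1) = 409.2 s, roughly 6 min 49 s.
        System.out.printf("total wait across 10 retries: %.1f s%n", total);
    }
}
```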
let drv = self.repo.get(&calc_fee_intent.entity.driver_id).await?;
let fee = drv.calculate_fee(&calc_fee_intent.entity.transit_price); // <2>
log::debug!("fee value: {:?}", fee);
let driverfee_event = DriverFeeEvent {
    driver_id: calc_fee_intent.entity.driver_id,
    fee,
}; // <3>
let mut builder = driverfee_event.to_builder(); // <3>
if let Some(id) = subject {
    builder = builder.subject(id);
} // <3>
@@ -140,10 +144,10 @@ impl Service {
In the above code, we are doing the following:

<1> We are unwrapping the _Cloud Event_ envelope into an internal, domain, fee value object.
<2> We are calculating the fee value using some domain logic.
<3> We are wrapping the calculated fee value into a new _Cloud Event_.
<4> We are sending the fee, as a _Cloud Event_, back to the _Event Mesh_ using an _HTTP REST_ client.
Of course, in order for this method to be called, we need to route the event from the HTTP listener:
@@ -238,13 +242,13 @@ spec:

[IMPORTANT]
====
In our example, the policy is `+exponential+`, and the `+retry+` is 10, which means that after approximately 6 min and 50 sec the event will be dropped (or routed to the `+deadLetterSink+` if configured).
====

[NOTE]
====
A `+deadLetterSink+` option could be configured for the _Broker_ to send the events that failed to be delivered in time to a back-up location.
Events captured in a back-up location can be re-transmitted into the _Event Mesh_ later by reconfiguring the _Mesh_ (after resolving the outage or deploying a bug fix).
====
<1> Notice, we are just invoking the `+calculateDriverFee+`, which doesn't return anything.
It's asynchronous.
<2> We are using the `@EventListener` annotation to listen for the domain events within the application.
Don't confuse this with _Cloud Events_ that are sent and received outside the application.
<3> The exact fee is calculated by the _Drivers_ module, and we'll be notified later, with the `+driverFeeCalculated+` method.
====
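The in-application event flow that the callouts describe can be sketched without any framework. The sketch below is illustrative only: the `EventsPublisher` and `DriverFeeCalculated` names stand in for the framework's publisher and the demo's domain event, and the registered lambda plays the role of a method annotated with `@EventListener`.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class DomainEvents {

    // Stand-in for the demo's domain event (illustrative shape).
    record DriverFeeCalculated(long driverId, int fee) {}

    // Minimal in-process publisher, analogous to the framework's
    // EventsPublisher: it dispatches each published event to every
    // registered listener.
    static class EventsPublisher {
        private final List<Consumer<Object>> listeners = new ArrayList<>();

        void subscribe(Consumer<Object> listener) {
            listeners.add(listener);
        }

        void publish(Object event) {
            listeners.forEach(listener -> listener.accept(event));
        }
    }

    public static void main(String[] args) {
        EventsPublisher publisher = new EventsPublisher();
        // Plays the role of a method annotated with @EventListener:
        publisher.subscribe(event -> {
            if (event instanceof DriverFeeCalculated calculated) {
                System.out.println(
                        "fee for driver " + calculated.driverId() + ": " + calculated.fee());
            }
        });
        publisher.publish(new DriverFeeCalculated(42, 99));
    }
}
```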

To communicate with the _Event Mesh_, we need to add a new _Cloud Event_ sender and listener.
That's done similarly to the case of the _Rust_ application.

Below, you can see how you may implement the _Cloud Event_ sender:
@@ -391,13 +395,13 @@ public class CloudEventReceiver {
}
----
<1> We unwrap the _CloudEvent_ into our domain event type (in the example that's the `+DriverFeeCalculated+` type).
<2> And publish it within the application, using the framework's _EventsPublisher_ implementation.
The domain events will be transmitted to the methods annotated with `@EventListener`.

[CAUTION]
====
Don't confuse the framework's _EventsPublisher_ with the _Cloud Event_ sender and receiver.
====
==== The wiring of our _Event Mesh_
@@ -765,7 +769,8 @@ The OpenShift Container Platform can provide a clear visualization of o
image::solution-odc.png[width=100%]

The console shows two sink bindings on the left, and they are feeding the events from the applications to the _Broker_ (depicted in the center).
The _Broker_ is the centralized infrastructure piece that ensures a proper decoupling of the services.
On the right, you can see the two applications deployed as _Knative_ services, and two triggers (as lines) that configure the _Event Mesh_ to feed appropriate events to the applications.
documentation/modules/ROOT/pages/developer-resources.adoc
@@ -6,8 +6,8 @@
== Developer Resources

* https://github.com/cardil/cabs-usvc[Demo source code] — _The example code used in this solution, based on the https://github.com/legacyfighter/cabs-java[LegacyFighter Java app]_
* https://youtu.be/Rc5IO6S6ZOk[Let's get meshy! Microservices are easy with Event Mesh] — _The talk that served as a base for this solution_
* https://www.redhat.com/en/technologies/cloud-computing/openshift/serverless[Red Hat OpenShift Serverless]