```kotlin
  .to("name-formatted")
```

Put two messages on the `name` topic with the same key while the application is stopped.

```shell
tom perks
tom matthews
```

Then run the application; as expected it will process both messages.

```shell
Processing tom, perks
Processing tom, matthew
```

Reset the application so the next experiment starts from a clean state.

```shell
docker exec -it kafka-3 kafka-streams-application-reset --application-id OrderProcessing \
  --zookeeper zookeeper-1:22181,zookeeper-2:22182,zookeeper-3:22183
```

Now let's add a join to itself using the KTable.

```kotlin
val nameKTable = streamsBuilder
  // ...
  .to("name-formatted", Produced.with(Serdes.String(), Serdes.String()))
```

Now if we (inner) join the stream to the table, send these messages, and then start the application up:

```shell
zara:a
paul:a
```

We now get a result of processing just the last value for each key. Interestingly, the last message is processed first, most likely due to the compaction and partitioning.

```shell
Processing paul, a
Processing zara, c
Joining the Stream Name c to the KTable Name c
```
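The startup behaviour can be sketched by modelling the compacted topic as a last-write-wins map. This is my own simplification (real compaction runs asynchronously and the log head may still hold older records), but it shows why only the latest value per key is seen on a fresh read:

```kotlin
// Hypothetical model of a fully compacted topic: reading it from the
// beginning yields only the last value written for each key.
fun compacted(messages: List<Pair<String, String>>): Map<String, String> {
    val latest = LinkedHashMap<String, String>()
    for ((key, value) in messages) {
        latest.remove(key) // re-insert so the entry moves to the log tail
        latest[key] = value
    }
    return latest
}

fun main() {
    val messages = listOf("zara" to "a", "zara" to "c", "paul" to "a")
    // Only zara's final value and paul's single value survive compaction.
    println(compacted(messages)) // {zara=c, paul=a}
}
```

The map also hints at the ordering effect seen above: the surviving records sit at the compacted tail, not in their original arrival order.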

Now if we left join the stream to the table itself, put these messages, and start up.

```shell
zara:d
zara:f
paul:b
```

As expected, a left join makes no difference; we get the same result as before.
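This can be sketched in plain Kotlin, treating the KTable as a simple map (my own simplification, not the Streams API): a left join only differs from an inner join when a stream key is missing from the table, and joining a topic to itself means every key is present.

```kotlin
// Simplified stream-table join semantics; the table is modelled as a map.
fun innerJoin(stream: List<Pair<String, String>>, table: Map<String, String>): List<String> =
    stream.mapNotNull { (key, value) -> table[key]?.let { "$value joins $it" } }

fun leftJoin(stream: List<Pair<String, String>>, table: Map<String, String>): List<String> =
    stream.map { (key, value) -> "$value joins ${table[key]}" }

fun main() {
    // Joining the topic to itself: every stream key exists in the table,
    // so inner and left join give the same result.
    val selfTable = mapOf("zara" to "f", "paul" to "b")
    val stream = listOf("zara" to "f", "paul" to "b")
    println(innerJoin(stream, selfTable)) // [f joins f, b joins b]
    println(leftJoin(stream, selfTable))  // [f joins f, b joins b]

    // With a key missing from the table the behaviours diverge.
    val partial = mapOf("zara" to "f")
    println(innerJoin(stream, partial))   // [f joins f]
    println(leftJoin(stream, partial))    // [f joins f, b joins null]
}
```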

```shell
Processing paul, b
```

If we were to rekey and join with a different key, what are the semantics?

```kotlin
  // ...
  .to("name-formatted", Produced.with(Serdes.String(), Serdes.String()))
```

Put these messages onto the compacted topic `name` whilst the application is down.

```shell
sarah:mark1
mark:sarah1
sarah:mark3
mark:sarah2
```

As above, the result is that we take the latest value for each key from the table and only process that on start up.

```shell
Processing sarah, mark3
Processing mark, sarah2
Joining the Stream Name mark3 to the KTable Name sarah2
Joining the Stream Name sarah2 to the KTable Name mark3

OutputTopic >
sarah2
mark3
```
This results in processing all three messages on the stream but no successful joins. The behaviour falls in line with the stream not waiting for the table to populate before streaming all messages. The timestamps do not match because we send to the table's `last-name` topic after the stream messages, so nothing is joined.

```shell
Processing 2, mark
Processing 1, matthew
```

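The timestamp behaviour can be sketched with a simplified model of my own (not the Streams internals): treat each table update as carrying an event timestamp, and let a stream record only see table values whose timestamp is at or before its own. A table update that arrives "after" a stream record then cannot join with it:

```kotlin
// Hypothetical model of event-time stream-table matching: each table
// update carries a timestamp, and a stream record at time ts only sees
// updates with timestamp <= ts.
data class TableUpdate(val value: String, val ts: Long)

fun lookupAt(ts: Long, key: String, table: Map<String, List<TableUpdate>>): String? =
    table[key]?.filter { it.ts <= ts }?.maxByOrNull { it.ts }?.value

fun main() {
    // The last-name table entry is written at t=20, after the stream record.
    val table = mapOf("3" to listOf(TableUpdate("last", 20L)))
    println(lookupAt(10L, "3", table)) // null: the table value is still in the future
    println(lookupAt(25L, "3", table)) // last: a later stream record joins fine
}
```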
If we send a last name and then a first name, like so:

Sending on the `last-name` topic:

```shell
3:last
```

Then send on the `first-name` topic:

```shell
3:first
```

The application output:

```shell
Processing 3, first
Joining the Stream First Name first to the KTable Last Name last
```

This is due to the timing semantics of the KTable: joins use event time, which in this case is the default Kafka broker event time.

Let's put another first name with the same key.

Now let's do it with a GlobalKTable. I would expect the GlobalKTable to pause execution until it is populated and then join successfully, while still streaming all keys.
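That expectation can be sketched as a toy model (my own, not the Streams implementation): the difference between the two is whether the table is fully bootstrapped before any stream record is processed, or whether table updates interleave with the stream by arrival order.

```kotlin
// Toy model of KTable vs GlobalKTable bootstrap behaviour (an assumption,
// matching the expectation above, not measured behaviour).
data class Rec(val key: String, val value: String, val isTableUpdate: Boolean)

fun process(events: List<Rec>, bootstrapTableFirst: Boolean): List<String> {
    val table = mutableMapOf<String, String>()
    // A GlobalKTable-style run applies every table update before streaming.
    val ordered = if (bootstrapTableFirst)
        events.filter { it.isTableUpdate } + events.filter { !it.isTableUpdate }
    else events
    val joins = mutableListOf<String>()
    for (e in ordered) {
        if (e.isTableUpdate) table[e.key] = e.value
        else table[e.key]?.let { joins += "${e.value} joins $it" }
    }
    return joins
}

fun main() {
    // The stream record arrives before the table update for the same key.
    val events = listOf(
        Rec("3", "first", isTableUpdate = false),
        Rec("3", "last", isTableUpdate = true),
    )
    println(process(events, bootstrapTableFirst = false)) // []: nothing to join yet
    println(process(events, bootstrapTableFirst = true))  // [first joins last]
}
```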