Kafka Error Handling with Spring Boot | Retry Strategies & Dead Letter Topics | JavaTechie
- Published 29 Feb 2024
- #JavaTechie #Kafka #SpringBoot #ErrorHandling
👉 In this video, we will understand how to handle errors in Kafka using retry and DLT (Dead Letter Topic), with a real-time example
🧨 Hurry up & register today!🧨
Devops for Developers course (Live class ) 🔥🔥:
javatechie.ongraphy.com/cours...
COUPON CODE : NEW24
Spring Boot microservice premium course launched with 70% off 🚀 🚀
COURSE LINK :
javatechie.ongraphy.com/cours...
PROMO CODE : JAVATECHIE50
GitHub:
github.com/Java-Techie-jt/kaf...
Blogs:
/ javatechie4u
Facebook:
/ javatechie
Join this channel to get access to perks:
czcams.com/users/javatechiejoin
🔔 Guys, if you like this video, please do subscribe now and press the bell icon so you don't miss any updates from Java Techie.
Disclaimer/Policy:
📄 Note: All content uploaded to this channel is mine and is not copied from any community; you are free to use the source code from the above-mentioned GitHub account. - Science & Technology
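The retry-plus-DLT flow the video covers can be sketched with Spring Kafka's `@RetryableTopic` and `@DltHandler`. This is a minimal config sketch, not the video's exact code: the topic name `orders`, group id, and `process` method are illustrative assumptions.

```java
import org.springframework.kafka.annotation.DltHandler;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.RetryableTopic;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.retry.annotation.Backoff;
import org.springframework.stereotype.Service;

@Service
public class OrderConsumer {

    // 1 initial attempt + 3 retries with a 2s exponential backoff; each retry
    // goes to an auto-created retry topic, and after the last failure the
    // record lands on the "<topic>-dlt" topic.
    @RetryableTopic(attempts = "4", backoff = @Backoff(delay = 2000, multiplier = 2.0))
    @KafkaListener(topics = "orders", groupId = "order-group")
    public void consume(String message) {
        // any exception thrown here triggers the retry-topic flow
        process(message);
    }

    @DltHandler
    public void handleDlt(String message,
                          @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
        System.out.println("Dead letter received from " + topic + ": " + message);
    }

    private void process(String message) {
        // placeholder for the real business logic / external service call
    }
}
```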
I got interviewed today and was asked about DLT; now my concept is absolutely clear. Thanks for this amazing stuff 😍😍
Exceptional... I was never aware of DLT... really made my day, feeling that I learned something new today. I always keep watching many of your posted videos... thanks for your efforts in sharing your knowledge
Glad that it helps you. Keep learning 😃
Hello, please make a video on Spring Boot hexagonal architecture. A lot of companies are using it as a modern development style; I struggle a lot and still don't understand the entire structure.
Okay, sure, I will do that
Another real-time video from you, sir. Thanks so much, sir, for your hard work
Great work, exactly what I have been looking for. Thanks a lot for the hard work in bringing this tutorial.
Great work sir. Thanks again
Appreciate your efforts Basant. God bless you❤😊every week waiting for new updates…
Excellent content... As always, thanks a lot, sir 👍🏻
Thanks a lot for these amazing tutorials! I learned a lot from your videos.
Thanks a lot for the good work! As usual, this video is informative and practical
Thanks a lot on good work !
Great thanks
Thanks a lot sir from bangalore ❤🙏
Do you have an explanation for publisher retries?
Thank you sir for your clear explanation. I have one question: why are we creating multiple retry topics here, although we already have a DLT topic to track the failed messages?
Can't we reuse the same topic for retry?
Yes, we can override this behaviour, but we need to check this configuration
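One way to override the one-topic-per-attempt behaviour is Spring Kafka's single-retry-topic strategy: with a fixed delay, all retry attempts can share one `<topic>-retry` topic. A config sketch under that assumption (attribute name varies by version: newer releases use `sameIntervalTopicReuseStrategy`, older ones `fixedDelayTopicStrategy = FixedDelayStrategy.SINGLE_TOPIC`; topic name is illustrative):

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.RetryableTopic;
import org.springframework.kafka.retrytopic.SameIntervalTopicReuseStrategy;
import org.springframework.retry.annotation.Backoff;
import org.springframework.stereotype.Service;

@Service
public class SingleRetryTopicConsumer {

    // Fixed 3s delay (no multiplier) + SINGLE_TOPIC: all retry attempts are
    // routed through one shared retry topic instead of one topic per attempt.
    @RetryableTopic(
            attempts = "4",
            backoff = @Backoff(delay = 3000),
            sameIntervalTopicReuseStrategy = SameIntervalTopicReuseStrategy.SINGLE_TOPIC)
    @KafkaListener(topics = "orders", groupId = "order-group")
    public void consume(String message) {
        process(message);
    }

    private void process(String message) { /* business logic */ }
}
```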
Another scenario: why not wait for the external service to wake up so that we can resume processing? This way we can avoid one drawback of the earlier approach, which is as follows:
For one entity we got an error and pushed it to the DLT, but then we got another message for the same entity and it was processed successfully. Now, when the DLT messages are processed, they will update the entity with the older data, which will create data inconsistency.
Waiting for the service to wake up will ensure two things:
1. safeguard the chronology of the events
2. no unnecessary consumption, retry, and publishing to the DLT.
This is my observation. I would like to hear from you on this. Thank you sir.
Good observation and agree with you 🙂
Please continue the interview series. Waiting for so long @@Javatechie
I really appreciate your interest and I will continue, buddy, but I need enough time for the presentation and pieces of code, so please help me to help you out.
Hi Satya, the data inconsistency scenario you describe happens when consumer-side resources are unavailable, but I believe DLT error topics are usually meant to investigate/analyse the root cause of failed messages (like NPE, ArrayIndexOutOfBoundsException, etc.), not to reprocess the DLT messages again.
great
Hi Basanth, if possible can you please make a video on message delivery semantics, like exactly once, at most once, and at least once, and how to avoid duplicate messages on the consumer side if the application is running on 2 to 3 pods. Thank you!
It's a good suggestion thanks will plan it
Thank you bro for your videos providing good knowledge to us. I have some questions from a recent interview: what are locks in Spring, where have you used the singleton pattern in your project, and what is idempotency? Hope you will provide answers for these questions
All your doubts are already answered in the QA series video.
Hey guys, I need to implement retries when producing to Kafka, and the related tests. Do you have references on how to accomplish this?
I don't have a video on it, but the solution is straightforward: you can use Spring Retry directly in your producer code
@@Javatechie thank you for answering. I made multiple attempts but I always struggle with the test classes. In the end I stayed with the producer configuration retries suggested by Kafka, but still had no luck with the tests
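The Spring Retry approach suggested above can be sketched as follows. This is a hedged sketch, not the channel's code: it assumes `@EnableRetry` is present on a configuration class, a Spring Kafka 3.x `KafkaTemplate` (whose `send` returns a `CompletableFuture`), and an illustrative topic name `orders`.

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Recover;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;

@Service
public class ResilientProducer {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public ResilientProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Retry the send up to 3 times with a 1s backoff.
    @Retryable(maxAttempts = 3, backoff = @Backoff(delay = 1000))
    public void send(String message) {
        // block on the future so a broker failure surfaces as an exception
        // that @Retryable can intercept (Spring Kafka 3.x: CompletableFuture)
        kafkaTemplate.send("orders", message).join();
    }

    // Invoked once all retry attempts are exhausted.
    @Recover
    public void recover(Exception e, String message) {
        System.err.println("Giving up on message: " + message + " (" + e.getMessage() + ")");
    }
}
```

For testing, one common pattern is to mock the `KafkaTemplate` so its `send` throws for the first N calls, then verify the number of invocations; the raw Kafka client alternative is the producer `retries` / `delivery.timeout.ms` settings mentioned in the comment above.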
Please continue the Spring Boot interview series, and add security-related questions
Yes, next weekend I will publish that
What happens to the DLT topic when an exception record is written to it? Does the programmer need to manually retry from this topic, or is it taken care of by Kafka?
@javatechie I have the same question
No, you need to create another publish method that reads records from the DLT and processes them through the existing flow. But before that you need to identify why it failed the first time: if it's stale data, you need to discard those failed events and reprocess the others
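The manual DLT reprocessing described above could look roughly like this config sketch: a listener on the dead letter topic that discards stale records and republishes the rest to the main topic. The topic names, group id, and `isStale` check are all hypothetical, not from the video.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class DltReprocessor {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public DltReprocessor(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Kafka does not replay DLT records by itself; this listener does it.
    @KafkaListener(topics = "orders-dlt", groupId = "dlt-reprocessor")
    public void reprocess(String message) {
        if (isStale(message)) {
            // stale data: discard instead of replaying, per the answer above
            return;
        }
        // replay the record through the existing flow on the main topic
        kafkaTemplate.send("orders", message);
    }

    private boolean isStale(String message) {
        // hypothetical check, e.g. compare the event's timestamp or version
        // against the current state in the database
        return false;
    }
}
```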
Are the implementation and configuration the same for the Kafka producer?
No, for the producer it's different
@@Javatechie can you please suggest/advise me how to do this for the producer part?
Why don't you use KRaft?
Sir, if my Kafka is down when I push a message, and using the retryable method I retry for up to 15 minutes, and Kafka comes back up within those 15 minutes, will it work?
Yes, the consumer will pick it up on the first attempt, because in the consumer properties we have defined the offset reset type as earliest
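The consumer property referred to above is `auto.offset.reset`. A minimal sketch of the relevant consumer properties as plain strings (the broker address and group id are illustrative assumptions):

```java
import java.util.HashMap;
import java.util.Map;

// Kafka consumer properties as they would appear in application.yml or a
// ConsumerFactory configuration.
public class ConsumerProps {

    public static Map<String, Object> consumerConfig() {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "order-group");
        // "earliest" means a consumer group with no committed offset starts
        // from the beginning of the topic, so records produced while the
        // consumer was down are still delivered once it comes back up.
        props.put("auto.offset.reset", "earliest");
        return props;
    }
}
```

Note that once the group has committed offsets, resuming from the last committed offset (not `auto.offset.reset`) is what guarantees the missed records are consumed.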
Is there any retry in the producer like in the consumer?
@@tejastipre9787 hello, yes, we can implement Spring Retry on the producer side as well
Thank you, I did.
But now my problem is: if I push a message from the producer and hold execution at a debug breakpoint, then shut down Kafka and release the breakpoint, the producer keeps trying to push the message continuously. When I start Kafka again the message does get produced, but the consumer does not receive these messages and an error appears in the console.
Please add this to a playlist
It is there in the Kafka playlist