7 Reasons why your microservices should use Event Sourcing & CQRS - Hugh McKee

  • Published 6. 09. 2024
  • Event Sourcing & CQRS offers a compelling and often controversial alternative for persisting data in microservice environments. This alternative approach is new for most of us, and a healthy level of skepticism towards any shiny new, often over-hyped solution is justified. What is interesting, however, is that the approach is so new that even its champions and evangelists often overlook the real benefits provided by this way of capturing and storing data.
    In this talk, we will look at 7 of the top reasons for using Event Sourcing & CQRS. These reasons go beyond the often-referenced benefits, such as event stores serving as natural audit logs or offering the ability to go back in history and replay past events. The primary goal of this talk is to flip your view from limited or no use of ES & CQRS to an alternate perspective: what do you give up when you elect not to use it as your go-to persistence strategy?
    Slides and more information: www.reactivesu...
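
    To make the core idea concrete, here is a minimal sketch of event sourcing in Java: state is never updated in place; instead, domain events are appended to a log and current state is rebuilt by replaying them. All names (CartEvent, ItemAdded, and so on) are invented for illustration and do not come from the talk.

    import java.util.ArrayList;
    import java.util.List;

    // Minimal event-sourcing sketch: state is derived by replaying an append-only
    // log of events rather than by reading a mutated "current state" row.
    public class EventSourcingSketch {

        // Hypothetical domain events for a shopping cart.
        sealed interface CartEvent permits ItemAdded, ItemRemoved {}
        record ItemAdded(String sku, int qty) implements CartEvent {}
        record ItemRemoved(String sku, int qty) implements CartEvent {}

        // Write side: commands are validated and then appended as events.
        static final List<CartEvent> eventLog = new ArrayList<>();

        static void addItem(String sku, int qty)    { eventLog.add(new ItemAdded(sku, qty)); }
        static void removeItem(String sku, int qty) { eventLog.add(new ItemRemoved(sku, qty)); }

        // Read side: the current quantity of a SKU is a fold over past events.
        static int quantityOf(String sku) {
            int qty = 0;
            for (CartEvent e : eventLog) {
                if (e instanceof ItemAdded a && a.sku().equals(sku))   qty += a.qty();
                if (e instanceof ItemRemoved r && r.sku().equals(sku)) qty -= r.qty();
            }
            return qty;
        }

        public static void main(String[] args) {
            addItem("book-42", 2);
            removeItem("book-42", 1);
            System.out.println(quantityOf("book-42")); // prints 1, rebuilt from the log
        }
    }

    In a full CQRS setup the read side would typically be a separately stored projection updated as events arrive, rather than a fold over the whole log on every query as in this sketch.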

Comments • 23

  • @enquiresandeep · 5 years ago · +5

    Perhaps the best explanation of CQRS and Event Sourcing

  • @ihatesleep · 5 years ago · +2

    Fantastic talk! I’m really enjoying these patterns and how they can help calm the paranoia of failure 😜

  • @JonathanYee · a year ago

    My take is just on his example of services relying on each other to complete a workflow (order + credit service). The thing he's missing is the saga pattern, and if they had done their DDD well they might have realised that some of those services belong in the same bounded context. Eventing in microservices should be broadcast-and-forget; once you have the notion of state 1 > state 2 > state 3, you need another strategy to manage that.
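
    For readers unfamiliar with the saga pattern mentioned here, a rough orchestration-style sketch in Java follows. OrderSaga and its states are hypothetical names, not from the talk: the saga reacts to outcomes from the credit and order services and would run a compensating action when a step fails.

    // Rough orchestration-style saga for an order + credit workflow: each step
    // reacts to an outcome from another service and either advances the saga or
    // runs a compensating action. Names are invented for illustration.
    public class OrderSagaSketch {

        enum SagaState { RESERVING_CREDIT, PLACING_ORDER, COMPLETED, COMPENSATED }

        static class OrderSaga {
            SagaState state = SagaState.RESERVING_CREDIT;

            void onCreditReserved() {
                state = SagaState.PLACING_ORDER;   // next: ask the order service to confirm
            }

            void onCreditRejected() {
                state = SagaState.COMPENSATED;     // compensate: cancel the pending order
            }

            void onOrderPlaced() {
                state = SagaState.COMPLETED;
            }
        }

        public static void main(String[] args) {
            OrderSaga saga = new OrderSaga();
            saga.onCreditReserved();               // credit service replied with success
            saga.onOrderPlaced();                  // order service confirmed the order
            System.out.println(saga.state);        // COMPLETED
        }
    }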

  • @shenth27 · 4 years ago · +2

    Excellent explanation of microservices and event sourcing

    • @ReactiveSummit · 3 years ago

      We are glad you found this useful. You might enjoy Hugh's latest video. It's on the Reactive Summit 2020 playlist.

  • @xprt642 · 5 years ago · +5

    A great talk, as usual.

  • @anatoliy.t · 3 years ago

    What a great talk!

  • @anuragbhatt2547 · 4 years ago

    My question is on point 6 about decoupling. What would we need to do if we had to change the database behind a microservice? Does that mean we also have to change the databases of all the calling services?

    • @singhpratyush_ · 3 years ago

      I think your query is about patterns for event schema versioning. Give it a read around the Internet; it is a well-known problem with this kind of system, and people have written books on it.
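
      One common event schema versioning pattern is upcasting: old events stay in the store forever, and the reader converts legacy shapes into the current one before applying them. A minimal sketch in Java, with invented event names:

      // Upcasting sketch: a v1 event read from the store is translated into the
      // current v2 shape before the rest of the system sees it.
      public class EventUpcastingSketch {

          // v1 stored only a full name; v2 splits it into first/last.
          record CustomerRegisteredV1(String fullName) {}
          record CustomerRegisteredV2(String firstName, String lastName) {}

          static CustomerRegisteredV2 upcast(CustomerRegisteredV1 old) {
              String[] parts = old.fullName().split(" ", 2);
              String last = parts.length > 1 ? parts[1] : "";
              return new CustomerRegisteredV2(parts[0], last);
          }

          public static void main(String[] args) {
              CustomerRegisteredV1 legacy = new CustomerRegisteredV1("Ada Lovelace");
              System.out.println(upcast(legacy)); // CustomerRegisteredV2[firstName=Ada, lastName=Lovelace]
          }
      }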

  • @cazino4 · 5 years ago · +7

    @36:52 Yes, but the problem is that where EVERY service has its OWN view of Customer and continues to conduct its respective business on that basis, then at some point later, when things become "eventually consistent", the dependent services may have made logical errors in their processing as a result of only having had access to STALE data at the time. Whilst this flaw may not be apparent with a fairly static entity like Customer, for services in an eCommerce system like Sales and Orders this approach will likely become problematic VERY quickly.
    Whilst there are obvious benefits to this approach, as clearly elucidated by the speaker, there are ALWAYS trade-offs; the "best" architectural solution will thus be some function of the task at hand and the associated use cases, and suitability must therefore be assessed on a case-by-case basis.

    • @Oswee · 5 years ago · +3

      But isn't it normal that in the Sales domain a Customer has one meaning and in the Accounting domain a Customer has a totally different meaning? I think that is how things work in the real world. The same goes for Orders: in the context of the Sales department it has one meaning, and in the context of the Warehouse it has another. And protobufs are a good option for establishing contracts between different services.
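
      A tiny sketch of that bounded-context idea, with invented field names: each service keeps its own local model of Customer holding only the fields its domain cares about, and the contract between services is the published event (whether encoded with protobuf or anything else), not a shared table.

      // Each bounded context keeps its own model of "Customer" with only the
      // fields its domain cares about; the shared piece is the customerId and
      // the events that flow between the services.
      public class BoundedContextSketch {

          record SalesCustomer(String customerId, String email, long creditLimitCents) {}
          record AccountingCustomer(String customerId, String legalName, String taxId) {}

          public static void main(String[] args) {
              SalesCustomer s = new SalesCustomer("c-1", "ada@example.com", 50_000);
              AccountingCustomer a = new AccountingCustomer("c-1", "Ada Lovelace Ltd", "TAX-123");
              // Same customer identity, two independent local views kept in sync via events.
              System.out.println(s.customerId().equals(a.customerId())); // true
          }
      }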

    • @cazino4 · 5 years ago · +2

      Dzintars Klavins Yeah.... you've COMPLETELY missed the point, and furthermore what you're asserting (i.e. specific business terms having contextually sensitive meanings) is at best entirely inconsequential and at worst tantamount to a straw-man argument. That said, I recognise that you may well have GENUINELY failed to understand the point(s) raised in my preceding message, so I'll attempt to clarify a few things here if I may:
      1) The general issue of data inconsistency (and the logical errors that may be introduced into a system as a result) is COMPUTER SYSTEM SPECIFIC and actually has very few analogues in the "real world", as you put it. At least when viewed through a classical (i.e. non-quantum) lens, this is principally because the state we ascribe to ANY noun is, by all accounts, binary in nature: an object either has a particular attribute or it doesn't, and there generally exists no frame of reference where BOTH SIMULTANEOUSLY hold true in a specific instance. This is, however, NOT guaranteed for the real-world objects we attempt to model in a computer system, and in fact great care has to be taken by the programmer to ensure that a CONSISTENT VIEW of model data is available, so that the associated data, and indeed the entire system, is not corrupted going forward. This is specifically why most programming languages (Java, C++, Go) provide integrated concurrency primitives (e.g. locks, semaphores, synchronized contexts, etc.).
      2) Please note that the e-commerce model cited in my preceding comment was selected on an entirely arbitrary basis, purely to illustrate the point more clearly; ANY real-world object we attempt to model in a computer system will be subject to the same data inconsistencies unless great care is taken. Data consistency, especially as a system begins to scale, is a NON-TRIVIAL concern.
      3) Perhaps now, having regard to the full details of data consistency issues as they relate specifically to computer systems, you can appreciate the point I was attempting to make in my prior comment. Put simply: if MULTIPLE VERSIONS of a SPECIFIC INSTANCE of a model exist in a computer system, each with its own respective state, this may introduce a huge number of difficult-to-debug, unreproducible errors if care is not taken, especially when the data is very dynamic in nature (i.e. very likely to change from one instant to the next). So the POINT I was making was that a Customer/User model is typically static in nature in most systems: once a customer has registered, their details tend to remain consistent, and thus an "eventually consistent" pattern of execution should not present a problem there. However, when you contrast this with, say, the model of a commodity in a trading system (a specific currency in a Forex (Foreign Exchange) system, for example), where there are very likely to be sub-microsecond fluctuations in the pricing data, I would ABSOLUTELY NOT be comfortable allowing trades to execute on index data that may well be stale; clearly a "strongly consistent" model would be suitable there. Again, it comes down to picking the right architecture for the task at hand, which may vary greatly, sometimes within the SAME application.
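
      One common guard on the write side of an event-sourced system, relevant to the stale-data concern raised here, is optimistic concurrency: the writer states which stream version its decision was based on, and the append is rejected if the stream has moved on. This is only a sketch with invented names, not the speaker's example:

      import java.util.ArrayList;
      import java.util.List;

      // The writer records the stream version its decision was based on; if the
      // stream has grown since that read, the append fails instead of silently
      // acting on stale data.
      public class OptimisticAppendSketch {

          record Event(String type) {}

          static final List<Event> stream = new ArrayList<>();

          static synchronized void append(Event e, int expectedVersion) {
              if (stream.size() != expectedVersion) {
                  throw new IllegalStateException("stale read: stream is at version "
                          + stream.size() + ", expected " + expectedVersion);
              }
              stream.add(e);
          }

          public static void main(String[] args) {
              int versionISaw = stream.size();                     // 0
              append(new Event("TradeExecuted"), versionISaw);     // accepted
              try {
                  append(new Event("TradeExecuted"), versionISaw); // rejected: stream moved to 1
              } catch (IllegalStateException ex) {
                  System.out.println(ex.getMessage());
              }
          }
      }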

    • @Oswee · 5 years ago · +2

      @cazino4 Thank you for spending your time and explaining. I am totally not an expert, just someone interested in this topic. Anyway... I kind of understand what challenges you are talking about.
      I just had a (probably dumb) idea: what if I implemented some kind of push-based temporary event store, so that when a Kafka consumer consumes a new command/event it first tells this store "I will process this message", and the store could notify all other consumers in the same consumer group which message was processed last and which message is currently being processed. After the message is processed, the consumer informs the store "I'm done with this message", and then a retention process comes into play. So if a consumer dies in the middle of nowhere, we have an additional party which knows about the last messages processed and about the messages hanging in the "processing" stage, so we can react to that.
      It's something like kids informing their parents that they are going to some party. If they don't come back in time, the parents start looking for them. They don't know exactly what happened, but they know where to start handling the issue.
      Don't judge me too harshly. :)
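
      The idea described here resembles a processing-status tracker: record "in progress" before handling a message and "done" afterwards, so that a supervisor can spot messages whose consumer died mid-way. A rough sketch in Java, with invented names and an in-memory map standing in for the tracking store:

      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      // Before handling a message the consumer records IN_PROGRESS in a tracking
      // store; after handling it, DONE. A supervisor can then spot offsets that
      // stayed IN_PROGRESS too long, i.e. a consumer that died mid-message.
      public class ProcessingTrackerSketch {

          enum Status { IN_PROGRESS, DONE }

          static final Map<Long, Status> tracker = new ConcurrentHashMap<>(); // offset -> status

          static void handle(long offset, String payload) {
              tracker.put(offset, Status.IN_PROGRESS);  // "I will process this message"
              // ... business logic runs here; a crash leaves the entry IN_PROGRESS ...
              tracker.put(offset, Status.DONE);         // "I'm done with this message"
          }

          public static void main(String[] args) {
              handle(0L, "OrderPlaced");
              tracker.put(1L, Status.IN_PROGRESS);      // simulate a consumer that died mid-message
              tracker.forEach((offset, status) -> {
                  if (status == Status.IN_PROGRESS) {
                      System.out.println("offset " + offset + " needs attention");
                  }
              });
          }
      }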

    • @thatpaulschofield · 4 years ago · +2

      @cazino4, can you give an example of the same data being used to enforce business rules in both the Sales and Accounting domains, and what the business cost would be if one of the domains was out of sync by (for example) a couple of seconds?

    • @chetanhanda · 4 years ago · +1

      @cazino4 It's common sense; there is nothing wrong with what the speaker is saying, you missed the point entirely.
      For static data, allow the data to be replicated; for dynamic data, don't use stale replica/snapshot data.
      The speaker is proposing a concept which says "it's OK nowadays to keep a copy of some data, because storage is cheap".
      It's expected that people have the common sense not to design business logic on data which cannot be stale, as in your Forex example.
      If the phone number of a customer is out of sync for 3 seconds, that's OK; we can still continue to function with a copy of the data, since storage is cheap. That is the point being made here.
      You missed the point, going on and on about something which is basic common sense.