"Transactions: myths, surprises and opportunities" by Martin Kleppmann

  • Published 26. 09. 2015
  • Back in the 1970s, the earliest databases had transactions. Then NoSQL abolished them. And now, perhaps, they are making a comeback... but reinvented.
    The purpose of transactions is to make application code simpler, by reducing the amount of failure handling you need to do yourself. However, they have also gained a reputation for being slow and unscalable. With the traditional implementation of serializability (2-phase locking), that reputation was somewhat deserved.
    In the last few years, there has been a resurgence of interest in transaction algorithms that perform well and scale well. This talk answers some of the biggest questions about the bright new landscape of transactions:
    What does ACID actually mean? What race conditions can you get with weak isolation (such as "read committed" and "repeatable read"), and how does this affect your application?
    What are the strongest guarantees we can achieve, while maintaining high availability and high performance at scale?
    How do the new generation of algorithms for distributed, highly-available transactions work?
    Linearizability, session guarantees, "consistency" and the much-misunderstood CAP theorem -- what's really going on here?
    When you move beyond a single database, e.g. doing stream processing, what are your options for maintaining transactional guarantees?
    Martin Kleppmann
    @martinkl
    Martin Kleppmann is a software engineer and entrepreneur, and author of the O'Reilly book Designing Data-Intensive Applications (dataintensive.net), which analyses the data infrastructure and architecture used by internet companies. He previously co-founded a startup, Rapportive, which was acquired by LinkedIn in 2012. He is a committer on Apache Samza, and his technical blog is at martin.kleppmann.com.
  • Science & Technology

Comments • 27

  • @ruixue6955
    @ruixue6955 3 years ago +48

    3:40 Durability
    4:28 Consistency
    4:36 != C in CAP theorem
    5:08
    5:27 transactions executed by the application move the database from one consistent state to another
    6:06
    6:21 Atomicity
    6:55 fault handling
    9:54 Isolation
    11:26 Question: *repeatable read* VS *read committed*
    12:00 explain by example
    13:02 read committed - 13:18
    14:57 *read skew* (can occur under *read committed*)
    15:06 assumption: 2 accounts: x & y
    15:33 consider: you have concurrently running a read-only transaction ( *backup process* or *analytic query* )
    16:30 problem for the *backup* : you've seen different part of databases at different point in time
    16:39 can happen under *read committed*
    16:48 *repeatable read* and *snapshot isolation* to prevent *read skew*
    17:25 more common: *snapshot isolation*
    18:36 example of *write skew* - 18:45 INVARIANT: at least 1 doctor is on-call
    19:42 assumption on data in database
    20:15 results in violation of INVARIANT: there has to be a doctor on-call
    20:33 in *Oracle* this cannot be prevented unless
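
    The write-skew example above (18:36) can be sketched in a few lines. This is my own illustration, not code from the talk: two "transactions" each read a stale snapshot of a doctors table (as snapshot isolation permits), each sees that someone else is still on call, and both go off call, violating the invariant. The doctor names and dict-based "database" are hypothetical stand-ins.

    ```python
    import copy

    # Shared table: doctor name -> on-call flag. Invariant: >= 1 doctor on call.
    db = {"alice": True, "bob": True}

    def transaction_go_off_call(snapshot, me):
        """Read-check-write against a stale snapshot, as snapshot isolation allows."""
        on_call = sum(snapshot.values())
        if on_call >= 2:            # "someone else is still on call"
            return {me: False}      # write set: take myself off call
        return {}

    # Both transactions take their snapshot *before* either commits.
    snap1 = copy.deepcopy(db)
    snap2 = copy.deepcopy(db)

    writes1 = transaction_go_off_call(snap1, "alice")
    writes2 = transaction_go_off_call(snap2, "bob")

    # First-committer-wins only detects write-write conflicts; the two write
    # sets touch *different* rows, so both commits succeed.
    db.update(writes1)
    db.update(writes2)

    print(db)               # {'alice': False, 'bob': False}
    print(any(db.values())) # False -- invariant violated: nobody is on call
    ```

    Serializable isolation would abort one of the two transactions here, because no serial order of the two produces this outcome.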

  • @aminigam
    @aminigam a year ago +6

    Brilliant enlightening session, a gem. Listening to Martin is a pleasure

  • @valentinwaeselynck8124
    @valentinwaeselynck8124 8 years ago +65

    "Every sufficiently large deployment of microservices contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half transactions" :D

    • @implemented2
      @implemented2 4 years ago

      What are "half transactions"?

    • @SimplyaGameAholic
      @SimplyaGameAholic 3 years ago +2

      @@implemented2 I think he means a transaction that can fail partway through with no way to roll back. Basically you have no atomicity, and you're leaving things messed up after any failure :)

  • @arunsatyarth9097
    @arunsatyarth9097 3 years ago +2

    Listening to Martin Kleppmann is like an orchestra. You enjoy it to the very limit!

  • @Peeja
    @Peeja 8 years ago +35

    Fantastic talk!
    It's worth noting, spacetime itself obeys the same upper bound on consistency without coordination: causality.

  • @michaeleaster1815
    @michaeleaster1815 8 years ago +8

    Thanks! I enjoyed this talk very much, esp. the sanity check of "who can describe read committed vs repeatable reads, from memory?".

  • @ChumX100
    @ChumX100 3 years ago

    Absolutely brilliant! Very entertaining as well.

  • @ykochubeev
    @ykochubeev 5 years ago

    Thank you so much! Very interesting problem highlighting!

  • @a3090102735
    @a3090102735 4 years ago +2

    This is a great talk! I'm also reading your book; the transactions section, however, makes much more sense to me now after listening to your explanations!

  • @PaulSladekb
    @PaulSladekb 8 years ago

    great talk!

  • @barcelona.netcore4191
    @barcelona.netcore4191 3 years ago

    Brilliant!

  • @sabirove
    @sabirove 6 years ago

    brilliant! ty!

  • @BillBurcham
    @BillBurcham 8 years ago +5

    At 34:35 czcams.com/video/5ZjhNTM8XU8/video.htmlm35s "Every sufficiently large deployment of microservices contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of transactions"

  • @gurjarc1
    @gurjarc1 2 years ago +3

    I have a doubt about 2-phase locking at 23:00 in the video.
    Say you have 2 txns, t1 and t2, that each read first, check some condition, and then update.
    Scenario 1: t1 gets an exclusive lock on the row (for the write) before t2 can get a shared lock to read. So t2's read has to wait for t1 to commit, and we get consistency.
    Scenario 2: say t1 and t2 have both executed the read part but not yet the modify part, so both hold shared locks after reading all the doctors where oncall=true.
    Now neither t1 nor t2 can commit: t1 can't write while t2 holds its shared lock, and vice versa. So this scenario is a deadlock.
    Can anyone confirm that scenario 1 is a case where 2PL successfully serialized the transactions, while in scenario 2 the timing was bad enough to cause a deadlock, requiring the database system to step in and victimize one of the two txns?
    Thanks in advance for helping a guy trying to understand these concepts

  • @ruixue6955
    @ruixue6955 3 years ago +1

    21:53 how to implement serializability
    22:08 two-phase lock

  • @gijduvon6379
    @gijduvon6379 2 years ago +3

    This video is simply awesome!

  • @samlaf92
      @samlaf92 6 months ago

    At 14:33, does a read committed implementation with row-level locking lead to deadlock here?

  • @darshanime
    @darshanime 2 years ago

    At 14:34, how does read committed prevent the inconsistency? Isn't it transaction serializability that prevents it?

  • @DanHaiduc
    @DanHaiduc 4 years ago +2

    Consensus is indeed expensive; blockchains are proof of that. Cryptocurrencies' transaction rates are limited either artificially, or by the processing power of the fastest single node. For anything faster, you'd have to do sharding, which sacrifices consensus.

  • @IanKjos
    @IanKjos a year ago

    Never did get to that explanation of "repeatable read"...

    • @bigphatkdawg
      @bigphatkdawg a year ago

      I think it was implied: not susceptible to read skew

  • @valtih1978
    @valtih1978 7 years ago

    Which `read skew` is he talking about? Read committed means that a lock is taken for the duration of the select statement. This read lock should prevent any commit during the 'backup process'.

    • @stIncMale
      @stIncMale 6 years ago +1

      Probably the best way to get an answer to your question is by reading the remarkable article called "A Critique of ANSI SQL Isolation Levels" (just google it).

    • @implemented2
      @implemented2 4 years ago +3

      You can have a long-running read transaction, for instance one making a dump. Locking the whole database against writes for the duration of the dump is not feasible.
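
      This is exactly what the multi-version approach behind snapshot isolation solves. A minimal sketch of the idea (my own illustration, with a hypothetical `MVCCStore` class): each write appends a new version stamped with a commit number, and a long-running dump reads only versions committed before it began, so writers are never blocked and the dump still sees a consistent state.

      ```python
      class MVCCStore:
          def __init__(self):
              self.commit_no = 0
              self.versions = {}   # key -> list of (commit_no, value)

          def write(self, key, value):
              # Append a new version instead of overwriting in place.
              self.commit_no += 1
              self.versions.setdefault(key, []).append((self.commit_no, value))

          def snapshot_read(self, key, as_of):
              # Newest version committed at or before the snapshot point.
              visible = [v for c, v in self.versions[key] if c <= as_of]
              return visible[-1]

      store = MVCCStore()
      store.write("x", 100)
      store.write("y", 100)

      dump_point = store.commit_no   # the backup's snapshot starts here

      store.write("x", 50)           # concurrent transfer of 50 from x to y
      store.write("y", 150)

      # The dump sees the pre-transfer state on both accounts: no read skew.
      print(store.snapshot_read("x", dump_point))  # 100
      print(store.snapshot_read("y", dump_point))  # 100
      ```

      A fresh transaction reading at the latest commit number would instead see 50 and 150, i.e. the transferred state, also consistent.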