They Enabled Postgres Partitioning and their Backend fell apart

  • Published May 2, 2024
  • In a wonderful blog post, Kyle explores the pains he faced managing a Postgres instance for the startup he works for, and how enabling partitioning created a significant number of lock wait events, causing the backend and subsequently NGINX to throw 500 errors.
    We discuss this in this video/podcast; a small wait-event query sketch follows the description below.
    www.kylehailey.com/post/postg...
    0:00 Intro
    3:30 Background
    6:30 April 1st The partitioning
    8:30 May 6th things went south
    10:30 April 11 The start of the waits
    12:00 Locks
    15:30 High volume of locks
    17:30 The Problem with Postgres 13
    20:00 Fastpath locking
    24:00 Moving to Weekly Partitions
    31:30 Summary
    Discovering Backend Bottlenecks: Unlocking Peak Performance
    performance.husseinnasser.com
    Fundamentals of Backend Engineering Design patterns udemy course (link redirects to udemy with coupon)
    backend.husseinnasser.com
    Fundamentals of Networking for Effective Backends udemy course (link redirects to udemy with coupon)
    network.husseinnasser.com
    Fundamentals of Database Engineering udemy course (link redirects to udemy with coupon)
    database.husseinnasser.com
    Follow me on Medium
    / membership
    Introduction to NGINX (link redirects to udemy with coupon)
    nginx.husseinnasser.com
    Python on the Backend (link redirects to udemy with coupon)
    python.husseinnasser.com
    Become a Member on YouTube
    / @hnasr
    Buy me a coffee if you liked this
    www.buymeacoffee.com/hnasr
    Arabic Software Engineering Channel
    / @husseinnasser
    🔥 Members Only Content
    • Members-only videos
    🏭 Backend Engineering Videos in Order
    backend.husseinnasser.com
    💾 Database Engineering Videos
    • Database Engineering
    🎙️Listen to the Backend Engineering Podcast
    husseinnasser.com/podcast
    Gears and tools used on the Channel (affiliates)
    🖼️ Slides and Thumbnail Design
    Canva
    partner.canva.com/c/2766475/6...
    Stay Awesome,
    Hussein
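    A minimal sketch of how the symptom described above can be spotted, assuming you can query the instance directly (the view and columns are standard Postgres; the exact wait event name varies by version):
    -- Snapshot of what active sessions are waiting on; a pile-up of LWLock waits on the
    -- lock manager (shown as "LockManager" on Postgres 13+, "lock_manager" on older versions)
    -- is the wait-event storm described before the 500 errors appeared.
    SELECT wait_event_type, wait_event, state, count(*)
    FROM pg_stat_activity
    WHERE state <> 'idle'
    GROUP BY wait_event_type, wait_event, state
    ORDER BY count(*) DESC;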
  • Science & Technology

Comments • 49

  • @hnasr
    @hnasr  10 months ago +1

    Check out my backend performance course performance.husseinnasser.com

  • @buddy.abc123
    @buddy.abc123 10 months ago +80

    As we become more and more senior we realise more and more that every decision in our field is basically a compromise. Nothing is perfect. There is no magic. There is a cost.

    • @ZeeshanAli-nk3xk
      @ZeeshanAli-nk3xk 10 months ago

      deep!

    • @just_A_doctor
      @just_A_doctor 10 months ago +1

      We become!?!
      Senior never said he is senior like that.

    • @EzequielRegaldo
      @EzequielRegaldo 10 months ago

      Dude that problem is super jr

    • @arcanernz
      @arcanernz 10 months ago +1

      @@EzequielRegaldo agreed, this bug negates partitions altogether; like, who thought it was ok to lock all partitions?

    • @Atlastheyote222
      @Atlastheyote222 9 months ago

      Sums up 99% of cloud development tbh

  • @kushalkamra3803
    @kushalkamra3803 10 months ago +5

    Thank you 🙏. I was wondering why the Postgres team did not anticipate locking issues while scaling in a feature that is meant to be used for scaling.

  • @haythamasalama0
    @haythamasalama0 10 months ago +2

    Great video ❤

  • @sujeetagrahari2292
    @sujeetagrahari2292 10 months ago +2

    You said that even if the query is for the data of a specific day, it will still scan all the partitions. But the AWS docs referenced in the blog suggest it reads many partitions only when you want the data from multiple days (see the pg_locks sketch after this thread):
    "1. You query many days worth of data, which requires the database to read many partitions.
    2. The database creates a lock entry for each partition. If partition indexes are part of the optimizer access path, the database creates a lock entry for them, too.
    3. When the number of requested locks entries for the same backend process is higher than 16, which is the value of FP_LOCK_SLOTS_PER_BACKEND, the lock manager uses the non-fast path lock method."

    • @LeoLeo-nx5gi
      @LeoLeo-nx5gi 10 months ago

      Can you please provide the link to the AWS blog? Thanks!!
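    A minimal pg_locks sketch tied to the FP_LOCK_SLOTS_PER_BACKEND note quoted above (standard catalog columns only; run it from another session while a partition-heavy query is in flight):
    -- Relation locks per backend, split into fast-path vs. main lock manager.
    -- Once a backend needs more than 16 fast-path slots, the overflow goes through
    -- the shared lock manager and can start to contend.
    SELECT pid,
           count(*) FILTER (WHERE fastpath)     AS fastpath_locks,
           count(*) FILTER (WHERE NOT fastpath) AS lock_manager_locks
    FROM pg_locks
    WHERE locktype = 'relation'
    GROUP BY pid
    ORDER BY lock_manager_locks DESC;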

  • @muayyadalsadi
    @muayyadalsadi 2 hours ago

    16:58 via the pg_partman or pg_cron extension
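    A rough sketch of that idea, assuming pg_partman and pg_cron are installed and configured (pg_cron preloaded) and the parent table is already registered with pg_partman via partman.create_parent (see its docs):
    -- Pre-create upcoming partitions on a schedule instead of at insert time;
    -- here pg_cron calls pg_partman's maintenance every night at 03:00.
    CREATE EXTENSION IF NOT EXISTS pg_cron;
    SELECT cron.schedule('0 3 * * *', $$SELECT partman.run_maintenance()$$);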

  • @engineerscodes
    @engineerscodes 10 months ago +1

    Nice video 👌

  • @pollathajeeva23
    @pollathajeeva23 10 months ago

    Interesting case study. I'm just curious how Postgres-compatible distributed SQL databases like YugaByte and CRDB mitigate this issue in a distributed system.

  • @Atlastheyote222
    @Atlastheyote222 9 months ago

    I was considering doing this for my PVE cluster of development environments to simplify docker swarm deployment. Maybe it isn’t worth the trouble lol

  • @lakhveerchahal
    @lakhveerchahal 10 months ago +2

    Hi Hussein, at 18:10 what was the query that you ran on pg13?
    I tried reproducing it, but it is locking only that specific partition.
    Here's what I did -
    Select * from orders where order_date = '2020-01-01';
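    One way to check exactly which relations that query locked (a sketch, assuming the orders table above is range-partitioned on order_date): keep the transaction open in one session and inspect pg_locks from another. Whether you see one partition or all of them depends on the Postgres version and on whether partition pruning happened at plan time (e.g. prepared statements with generic plans behave differently).
    -- Session 1: hold the locks so they stay visible
    BEGIN;
    SELECT * FROM orders WHERE order_date = '2020-01-01';

    -- Session 2: which relations do other backends hold locks on, and were they fast-path?
    SELECT pid, relation::regclass AS locked_relation, mode, fastpath
    FROM pg_locks
    WHERE locktype = 'relation' AND pid <> pg_backend_pid()
    ORDER BY pid;

    -- Session 1: release
    COMMIT;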

  • @diamondkingdiamond6289
    @diamondkingdiamond6289 10 months ago +6

    Also, don’t query times drastically increase the more partitions you have? E.g. if you have 1000 partitions, query times will be extremely slow.

    • @pajeetsingh
      @pajeetsingh 10 months ago

      Source it plox

    • @noir5820
      @noir5820 10 months ago +7

      Depends: are you querying multiple partitions or not? The whole point of partitioning is to speed up queries.

    • @jacob_90s
      @jacob_90s 9 months ago

      It adds a little more overhead but lets you scale farther. As with everything, it's all about pros and cons.

    • @codingjerk
      @codingjerk 4 months ago

      Planning will slow down, yes, but 1000 partitions is not a huge number. I used to have 4096 partitions on one huge table and it worked like a charm. Plans for some analytical queries were slowed down though, to around ~100 ms of overhead, but that's okay for analytical queries anyway.
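      A quick way to see the planning overhead being discussed (a sketch; assumes a partitioned table such as the orders example elsewhere in the comments):
      -- EXPLAIN with SUMMARY reports "Planning Time" without executing the query;
      -- compare it on a table with a handful of partitions vs. thousands.
      EXPLAIN (SUMMARY)
      SELECT * FROM orders WHERE order_date = '2020-01-01';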

  • @pajeetsingh
    @pajeetsingh 10 months ago +4

    When they are working on the database for their startup, why do they always start with one big table?
    As part of the design, expecting 10 million rows per day, the initial design could have considered partitioning.
    Is the industry standard to start with a novice design and fix things as the bugs/requirements come by?

    • @martinvuyk5326
      @martinvuyk5326 10 months ago +1

      Yep, that's exactly why NoSQL grew in popularity. FrontEnd and BackEnd don't wanna think about db

    • @mishikookropiridze
      @mishikookropiridze 8 months ago +1

      You won't know that you will have 10 mil rows per day until you have it.

    • @DoubleM55
      @DoubleM55 2 months ago

      It would be overkill to always engineer everything to support billions of rows, when probably it won't be needed.
      As your data starts to grow, you figure out a way to move forward.
      Works fine in most cases, unless you hit the wall on some weird behavior like shown in this video.

  • @pallavSemwal
    @pallavSemwal 10 months ago +1

    Could you please do a course on DevOps??

  • @samirzerrouki3153
    @samirzerrouki3153 10 months ago

    Hello Hussein,
    Can you please talk about the action taken by Twitter to prevent scraping (6000 posts/day)? And can we hear your thoughts about this and whether there are any alternative solutions to achieve the goal.
    Thank you :))

  • @ifyugwumba8120
    @ifyugwumba8120 10 months ago

    How do you do it, sir? Just after creating a course on Udemy you are back on YouTube. Show me thy ways.
    I'm not even done with your Udemy course.

  • @mawesome4ever
    @mawesome4ever 9 months ago

    This is interesting! Do you know if this has been fixed in the newer versions of Postgres? I'm looking for a DB for my game, and I think I'll eventually scale and don't want to run into this same issue.

    • @rayanfarhat5006
      @rayanfarhat5006 9 months ago +2

      Every DB has its own problems when scaling. If your game needs scaling, then you have enough money to hire a DB expert to fix your scaling issues.

  • @BryanChance
    @BryanChance 10 months ago +2

    Partitioning is too scary for me unless the data is non-critical and some data loss is acceptable. Often the configuration is so complex and brittle that it becomes a bigger problem than the one you're trying to solve. LOL, ooh, it gives me nightmares..

  • @GebzNotJebz
    @GebzNotJebz 10 months ago +2

    More power to you, engineer.. wonderful content

  • @rmacpie3475
    @rmacpie3475 9 months ago

    Hi,
    My IP is dynamic and it uses internal WAN services. Any solutions for port forwarding? I use UPnP port forwarding with no luck; I've tried all solutions, please help.

  • @pajeetsingh
    @pajeetsingh 10 months ago

    As of the latest Postgres, does it really create a process per SELECT operation?
    That's crazy thinking.

    • @DoubleM55
      @DoubleM55 2 months ago

      No, it never did. It creates a process per *connection*. And then one connection can perform many queries.
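      A tiny way to verify this from psql (standard functions; the backend_type column assumes Postgres 10+):
      -- Same connection => same backend process id for every statement.
      SELECT pg_backend_pid();
      SELECT pg_backend_pid();  -- returns the same pid as the previous SELECT

      -- Each connected client appears as exactly one backend process.
      SELECT pid, state, query
      FROM pg_stat_activity
      WHERE backend_type = 'client backend';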

  • @AkashBanik-sf2dw
    @AkashBanik-sf2dw 9 months ago

    What happened to your discord server??

  • @rydmerlin
    @rydmerlin 10 months ago

    Unlike Oracle, it won’t automatically create the partition upon insert. In my last assignment I chose to partition with a 7-day interval, and the table had billions of rows over a 10-year period. The table was also subpartitioned. This DBA looks like he had experience with Oracle. What authority could he have used to verify his plan before he took action?

    • @dinoscheidt
      @dinoscheidt 8 months ago

      Well, this kind of smoke testing is a case where human-machine collaboration (i.e. talking it through with the next version of ChatGPT) might be a valid path for the future. Not because the senior DBA doesn’t know what he is doing, and he still needs to know what he is doing, but because an LLM is great at filtering down a bunch of documentation, content, case studies and issues one could never have read in full, and pattern matching across them. Otherwise no fault; it's just life if one can’t afford to do an infinite amount of research and never get to action.

  • @bilincinontolojikizdirabi
    @bilincinontolojikizdirabi 8 months ago

    Why do they need to use transactions on this huge table? Isn't using data marts or DWHs more effective? This table should not be a transactional table, I believe, especially with 22 indexes???

  • @mhcbon4606
    @mhcbon4606 10 months ago

    I don't have a wide picture of the industry, but my gut feeling and small experience tell me that very often in system design people rely on auto-partitioning systems to "make it happen". They are confusing quantitative distribution with qualitative distribution. Partitioning the data in space is one thing; partitioning the data by access and usage is something else. Anyways, not my concern anymore.

  • @jp-wi8xr
    @jp-wi8xr 10 months ago

    "Disney magic genie" made me realize, you look like Aladdin.

  • @robarnold8377
    @robarnold8377 9 months ago

    Great video - this will sound odd but your accent is so Scottish in tone. I promise it’s not just because lock was said 100 times.

    • @robarnold8377
      @robarnold8377 9 months ago

      Also I never experienced this issue in oracle. Balancing global vs local partitions was important - but simple oracle partitions definitely felt more reliable.

    • @robarnold8377
      @robarnold8377 9 months ago

      All the partitions, wtaf! I just got to that part of the video. Hell,
      that’s a bug. That negates the usage of partitions on ANYTHING that gets an update operation.
      Maybe Postgres communicated this out in advance, but that does not seem like a good start for a partition use case.

  • @yassineoujaa2670
    @yassineoujaa2670 9 months ago

    Recently I asked the AI to generate random Arabic names, and one of the names was Nasser Hussein lol

  • @jondoe79
    @jondoe79 10 months ago +1

    Only partition data that is to be archived.

  • @pajeetsingh
    @pajeetsingh 10 months ago

    19:14 So you don't know, and we have the liberty to disagree with you?

  • @mrngwozdz
    @mrngwozdz 7 months ago

    You can do partitioning by date by adding an additional table and a rule on it. Here is how to achieve this:
    CREATE RULE autocall_createpartitionifnotexists AS ON INSERT TO public.data_temp DO INSTEAD (
        -- create the day's partition if needed, then route the row into the partitioned parent
        SELECT createpartitionifnotexists((NEW.value)::date) AS createpartitionifnotexists;
        INSERT INTO public.data (value) VALUES (NEW.value)
    );
    And the function createpartitionifnotexists:
    CREATE OR REPLACE FUNCTION public.createpartitionifnotexists(forDate date)
    RETURNS void
    AS $$
    declare
        dayStart date := date_trunc('day', forDate);
        dayEndExclusive date := dayStart + interval '1 day';
        tableName text := 'data_' || to_char(forDate, 'YYYYmmdd');
    begin
        if to_regclass('public.' || tableName) is null then
            execute format('create table public.%I partition of public.data for values from (%L) to (%L)', tableName, dayStart, dayEndExclusive);
            execute format('create unique index on public.%I (id)', tableName);
            execute format('create index on public.%I USING btree (value)', tableName);
        end if;
    end;
    $$
    LANGUAGE plpgsql;
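    A quick usage sketch for the snippet above, under the assumption that public.data is a range-partitioned parent on value and public.data_temp mirrors its columns (neither table is shown in the comment itself):
    -- Hypothetical setup the rule and function expect (create these before the rule above):
    CREATE TABLE public.data (id bigserial, value date) PARTITION BY RANGE (value);
    CREATE TABLE public.data_temp (id bigserial, value date);

    -- Inserting through data_temp creates public.data_20240502 on demand (if missing)
    -- and then routes the row into public.data.
    INSERT INTO public.data_temp (value) VALUES (DATE '2024-05-02');
    SELECT * FROM public.data WHERE value = DATE '2024-05-02';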