Out of disk | Postgres.FM 106

  • Added 5 Sep 2024
  • [ 🇬🇧_🇺🇸 Check out the subtitles - we now edit them, ChatGPT + manually! You can also try YouTube's auto-translation of them from English to your language; try it and share it with people interested in Postgres!]
    Nikolay and Michael discuss Postgres running out of disk space - including what happens, what can cause it, how to recover, and most importantly, how to prevent it from happening in the first place.
    Here are some links to things they mentioned:
    * Disk Full (docs) www.postgresql...
    * pgcompacttable github.com/dat...
    * Our episode on massive deletes postgres.fm/ep...
    * Getting Rid of Data (slides from VLDB 2019 keynote by Tova Milo)
    * pg_tier github.com/tem...
    * Data tiering in Timescale Cloud docs.timescale...
    * Postgres is Out of Disk and How to Recover (blog post by Elizabeth Christensen) www.crunchydat...
    * max_slot_wal_keep_size www.postgresql...
    * Our episode on checkpoint tuning postgres.fm/ep...
    * Aiven docs on full disk issues aiven.io/docs/...
    ~~~
    What did you like or not like? What should we discuss next time? Let us know in the comments, or by tweeting us on @postgresfm / postgresfm , @samokhvalov / samokhvalov and @michristofides / michristofides
    ~~~
    Postgres FM is produced by:
    - Nikolay Samokhvalov, founder of Postgres.ai postgres.ai/
    - Michael Christofides, founder of pgMustard pgmustard.com/
    ~~~
    This is the video version. Check out postgres.fm to subscribe to the audio-only version, to see the transcript, guest profiles, and more.

Comments • 4

  • @jocketf3083
    @jocketf3083 A month ago +1

    Thanks for another great episode!
    We once ran out of space after a small application change. Because of (...) reasons we needed to have our temp storage limit set high. The application change altered a query such that it took a very long time to finish, and the query slowly consumed temp storage space as it ran. Since the application kept kicking off new instances of that query, we ran out of space pretty fast!
    Captain Hindsight has a few lessons for us there, but at least the fix was easy. To be safe, we failed over to a standby replica and set the application's account to NOLOGIN. Once the application deployment had been rolled back we unbanned the account. We then took our time to clone the database to our old primary and let it rejoin our Pgpool load balancer as a replica.

  • @kirkwolak6735
    @kirkwolak6735 A month ago +2

    Great stuff as usual. I believe everyone should have something monitoring free disk space and alerting at some low-disk threshold, extra early. Would have been nice to hear what "formulas" you guys tend to use - like at least 3x daily WAL max, or some such.
    We've hit this in the past with another vendor, because someone left tracing on and a TON of logfiles were being produced that filled the disk...
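The kind of free-space check the comment asks for can be sketched in a few lines. This is a minimal example, not tied to any particular monitoring stack; the mount point and the 20%/10% warn/crit thresholds are illustrative assumptions - in practice you would point it at the volume holding your data directory and WAL, and pick thresholds from your own WAL churn.

```python
import shutil

def check_free_space(path, warn_pct=20.0, crit_pct=10.0):
    """Return 'ok', 'warn', or 'crit' based on the percentage of free space
    on the filesystem containing `path`. Thresholds are percentages."""
    usage = shutil.disk_usage(path)          # (total, used, free) in bytes
    free_pct = usage.free / usage.total * 100
    if free_pct < crit_pct:
        return "crit"
    if free_pct < warn_pct:
        return "warn"
    return "ok"

# Hypothetical data-directory mount; on most systems "/" also works for a demo.
print(check_free_space("/"))
```

Wired into cron or a monitoring agent, "warn" would page early enough to act (clean up logs, drop an abandoned replication slot, grow the volume) before Postgres actually hits the wall.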

  • @michaelbanck367
    @michaelbanck367 A month ago +1

    A reorg only needs extra space for the table's live data, temporarily. So if 90% of the rows were deleted, or bloat is 600% or more, the additional disk space needed is not 2x the table size but more like 10-30%.
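The comment's point is simple arithmetic: an online reorg (pg_repack-style) writes a fresh copy of only the live tuples, so the temporary extra space scales with the live fraction, not the full bloated size. A rough back-of-envelope sketch (ignoring index rebuilds and WAL, which also consume space during a real reorg):

```python
def repack_extra_space(table_size_gb, live_fraction):
    """Approximate extra disk (GB) an online reorg needs temporarily:
    roughly one full copy of just the live data."""
    return table_size_gb * live_fraction

# 100 GB table where 90% of rows were deleted: ~10 GB extra, not 100 GB.
print(repack_extra_space(100, 0.10))  # 10.0

# Fully live 100 GB table: the worst case is a full 2x (a whole extra copy).
print(repack_extra_space(100, 1.0))  # 100.0
```

So the "you need 2x the table size" rule of thumb is only the worst case for an unbloated table; heavily bloated tables need far less headroom.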