Efficient Time Series with PostgreSQL - Steve Simpson

  • Published 23 Aug 2024

Comments • 15

  • @howardmarles2576 4 years ago +6

    Please don't be put off by a slowish start. Great talk!! Stick with it until the end. Very well worth watching!!

  • @imanahmadvand8788 2 years ago

    The most interesting part of this talk was not just tweaking the database with partitioning tricks or other time-oriented extensions; it was the focus on normalization and tuning composite indexes to keep the work on the logarithmic lane. It was missing a query-plan examination, though!
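A tiny sketch of the kind of query-plan check the commenter has in mind, using SQLite's `EXPLAIN QUERY PLAN` as a lightweight stand-in for PostgreSQL's `EXPLAIN` (the table and index names here are illustrative, not from the talk):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE metrics (id INTEGER PRIMARY KEY, name TEXT, host TEXT);
    -- composite index: lookups on (name, host) stay logarithmic
    CREATE INDEX idx_metrics_name_host ON metrics (name, host);
""")

# EXPLAIN QUERY PLAN reports whether the planner uses the index for the lookup
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT id FROM metrics WHERE name = 'cpu' AND host = 'web-1'
""").fetchall()
detail = " ".join(row[-1] for row in plan)
print(detail)  # a SEARCH ... USING ... INDEX line means the composite index is in play
```

In PostgreSQL the equivalent check is `EXPLAIN` (or `EXPLAIN ANALYZE`) in front of the query, confirming an Index Scan rather than a Seq Scan.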

  • @howardmarles2576 4 years ago

    Thank you Steve Simpson

  • @StevenvanderMerweProfile 6 years ago +5

    Great talk on a complex subject, thanks. Did you consider using TimeScale (a PostgreSQL extension) as well?

    • @ariuszynski 4 years ago

      I'm using TimescaleDB and I really like it.

  • @Miggleness 6 years ago

    Good talk indeed. I really need to test whether Postgres can run that fast. I do wish we got some stats on the impact of normalisation, indexes, and triggers on insert performance.

  • @EllisWhitehead 6 years ago

    Very nice talk. Thanks!

  • @zacharythatcher7328 2 years ago

    Is there any way to ensure that the WHERE clause gets evaluated before the JOIN? Leaving it to Postgres to sort this out seems like a very risky trial-and-error design that could be broken by an unexpected Postgres update.

    • @zacharythatcher7328 2 years ago

      I think I can answer my own question. You would subquery the metric table with the latter two WHERE clauses and then pass the IDs via an IN clause to the query on the value table, so that you look at significantly fewer values. If you want to be even more specific you could make that a subquery to your range clause as well. This seems more stable to me, but maybe creating too many subqueries like this cuts down on Postgres' ability to parallelize operations.
      Can someone provide some detail as to why the presenter chose to leave the query so imperative?
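A minimal, runnable sketch of the restructuring the reply describes, filtering the small metadata table first and passing the matching IDs to the big samples table via IN. It uses SQLite purely for portability; the table and column names are assumptions, not the speaker's actual schema:

```python
import sqlite3

# Assumed two-table layout: a small "metrics" metadata table and a large
# "samples" table of time-series points (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE metrics (id INTEGER PRIMARY KEY, name TEXT, host TEXT);
    CREATE TABLE samples (metric_id INTEGER, ts INTEGER, val REAL);
    INSERT INTO metrics VALUES (1, 'cpu', 'web-1'), (2, 'cpu', 'web-2');
    INSERT INTO samples VALUES (1, 100, 0.5), (1, 200, 0.7), (2, 100, 0.9);
""")

# Filter the metadata first, then restrict the scan over the big table
# to the matching series via IN, instead of a free-form JOIN.
rows = conn.execute("""
    SELECT ts, val
    FROM samples
    WHERE metric_id IN (
        SELECT id FROM metrics WHERE name = 'cpu' AND host = 'web-1'
    )
    AND ts BETWEEN 100 AND 200
    ORDER BY ts
""").fetchall()
print(rows)  # -> [(100, 0.5), (200, 0.7)]
```

In practice PostgreSQL's planner will often produce the same plan for the JOIN and the IN-subquery forms; `EXPLAIN` on the real query is the way to confirm which shape it picks.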

  • @TomVectorG2 5 years ago

    Hi, one question: does Grafana have difficulty reading data via an SQL query when the column is in DATETIME format? Using the TIMESTAMP type the query worked, but with DATETIME it did not; the chart showed the wrong hour.
    I would be very grateful if anyone can help me.
    Note: I need to read a table that has a DATETIME column.

  • @supercompooper 6 years ago

    You should check out axibase! It's pretty excellent!

  • @MrAtomUniverse 3 years ago

    Why don't you just use TimescaleDB?

    • @helloworld7313 2 years ago

      The presenter mentioned the reason is to reduce operational burden: you can use a single database technology for the whole system.
      Also, from the video it looks like PostgreSQL scales pretty well with large queries and datasets.
      I think one scenario this approach won't fit is if they want much higher write throughput, which LSM-based databases are definitely better at.

  • @inraid 1 year ago

    Rambling and inarticulate