SQL Query Optimization. Why is it so hard to get right?

  • Published: 27. 06. 2018
  • Slides, notes, and donations: www.BrentOzar.com/go/dewitt
  • Science & Technology

Comments • 10

  • @DAWEAP1 • 5 years ago • +2

    Such an informative video. Thank you!

  • @tileq • 5 years ago • +5

    This is gold!

  • @brianmullins8444 • 6 years ago • +2

    Wow, that was awesome!

  • @krneki6954 • 5 years ago • +1

Excellent stuff! Thx Brent for uploading, keep it up! How can this only have 1.4k views???

  • @ArvindDevaraj1 • 4 years ago • +1

    Excellent lecture. 1:15:15 Prof. Jayant Haritsa is from IISc Bangalore

  • @KittenYour • 3 years ago • +1

Thank you

  • @youtubeshort6469 • 6 years ago • +3

    👍

  • @professortrog7742 • 9 months ago

    Good lecture; however, the statement at 29:30 (and again at 44:10) is blatantly incorrect. It DOES matter what your I/O read and write speeds are and how much buffer memory you have, and not only in a relative sense, especially when making the hard choice between a more I/O-heavy algorithm and a more memory-intensive algorithm.
    This is exactly why, in PostgreSQL for example, these numbers can be tweaked in the configuration, as sketched below.
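
    For reference, the configuration numbers this most likely alludes to are PostgreSQL's planner cost constants; a minimal sketch follows, with purely illustrative values rather than recommendations:

      -- Planner cost constants, set in postgresql.conf or per session via SET.
      -- The values below are illustrative examples only.
      SET seq_page_cost = 1.0;           -- cost of reading one page sequentially
      SET random_page_cost = 1.1;        -- often lowered on SSDs, where random I/O is cheap
      SET cpu_tuple_cost = 0.01;         -- CPU cost of processing one row
      SET effective_cache_size = '8GB';  -- planner's estimate of available OS/page cache
      SET work_mem = '64MB';             -- memory per sort/hash operation before spilling to disk

    With these knobs, the relative price of I/O versus memory directly steers the optimizer's choice between I/O-heavy and memory-intensive plans, which is the commenter's point.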

  • @joechang8696 • 1 year ago

    Can someone repost the gofundme? One issue in the SQL Server query optimizer is that it tries to find the lowest-cost plan for a given parameter set, but does not consider whether there is variation in the distribution. Suppose two plans differ only slightly in cost, but the lower-cost plan goes bad if a row estimate is off, while the higher-cost plan is more resilient to variation.
    Example: a table estimated to have at most zero or one row, joined to a large table with no suitable index. The large table must be accessed with a scan.
    The lowest-cost plan accesses the 0/1-row table and does a Nested Loops join into the large table, this being ever so slightly cheaper than hash joining the small table to the large table.
    But if the small table actually has 2 or more rows, not the estimated 0 or 1, you are screwed: the large table now gets scanned once per row. A sketch of the situation follows.
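
    A hypothetical T-SQL sketch of the scenario Joe describes; the table and column names are made up for illustration:

      -- TinyTable is estimated at 0-1 rows; BigTable has no index on JoinKey,
      -- so BigTable must be scanned either way.
      SELECT b.*
      FROM dbo.TinyTable AS t
      JOIN dbo.BigTable AS b ON b.JoinKey = t.JoinKey;
      -- With 0-1 estimated outer rows the optimizer may pick Nested Loops,
      -- i.e. one scan of BigTable per outer row: fine at 1 row, terrible at N.
      -- Forcing the slightly costlier but more resilient plan:
      SELECT b.*
      FROM dbo.TinyTable AS t
      JOIN dbo.BigTable AS b ON b.JoinKey = t.JoinKey
      OPTION (HASH JOIN);  -- one scan of BigTable regardless of the outer row count

    The hash join costs marginally more at the estimated row count but degrades gracefully when the estimate is wrong, which is the resilience trade-off the comment is about.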