How to Read Spark Query Plans | Rock the JVM

  • Published 7 Sep 2024

Comments • 43

  • @vash47
    @vash47 1 month ago +1

    Great video. I feel more comfortable now reading query plans. It's still tough!

  • @kapilsharma20
    @kapilsharma20 2 months ago

    @Daniel, absolutely genuine content, with a very straightforward, step-by-step explanation. Thank you so much for this.

  • @gilesbbb
    @gilesbbb 8 months ago +2

    This material hits the sweet spot for me. Thank you.

  • @atech_guide
    @atech_guide 4 years ago +7

    Great Explanation Daniel :)
    I have a request for a similar video on the Spark UI, please.

  • @ignaciomonetto6246
    @ignaciomonetto6246 1 year ago +1

    You make things seem simpler. Great job!

  • @sumanto_pal
    @sumanto_pal 1 year ago +1

    best video on spark ever

  • @svapneel1486
    @svapneel1486 4 years ago +4

    Great Video. Explaining complex concepts so simply. Fantastic!!

  • @slipcatchsnatcher
    @slipcatchsnatcher 4 months ago +1

    Mind-blown! Great content and in-depth explanation!

  • @doshimihir1
    @doshimihir1 1 year ago +1

    Awesome videos.. looking forward to the next one..

  • @ladakshay
    @ladakshay 3 years ago +1

    Superb Video! Query plans looked difficult to me before, but you made it simple. Thanks!!!

    • @rockthejvm
      @rockthejvm  3 years ago

      That's my goal - share it with other people that might find it useful!

  • @Jay-vh3gx
    @Jay-vh3gx 1 year ago +1

    Thank you for clear cut explanation ☺️

  • @shyamsundarrajasekaran8392

    Subscribed 😍 Wishing you all the best, nice explanation Daniel.

  • @vinitsunita
    @vinitsunita 1 year ago +1

    Awesome video

  • @cajaykiran
    @cajaykiran 1 year ago +1

    Thanks for the quality content, very very helpful 👌

  • @urxword
    @urxword 2 years ago +1

    You are so helpful, thank you so much for the explanation!

  • @prashantdhaundiyal2209
    @prashantdhaundiyal2209 4 years ago +1

    Well done Daniel. It was a nice explanation.

  • @rahulberry4806
    @rahulberry4806 3 years ago +1

    nice and crisp explanation

  • @technologyexplorer9193

    Precise explanation.
    Can you also please show how to write a full outer join in the non-ANSI way?

  • @sansanjusingh
    @sansanjusingh 2 years ago +1

    good one

  • @learn_technology_with_panda

    Nicely explained

  • @dataarchitect
    @dataarchitect 3 years ago +1

    A very good explanation!

  • @ethanquenum4778
    @ethanquenum4778 1 year ago +1

    Just amazing! Keep it up :)

  • @chandanprakash15
    @chandanprakash15 4 years ago +1

    very nicely explained, great video

  • @murthyspec2002
    @murthyspec2002 2 years ago

    wonderful explanation!!

  • @mootazmansouri7880
    @mootazmansouri7880 3 years ago +1

    awesome videos !
    keep them going

  • @user-zb1op8yl2x
    @user-zb1op8yl2x 1 year ago +1

    Awesome

  • @freetrainingzone7867
    @freetrainingzone7867 3 years ago +1

    Very nice explanation Daniel. Is it possible to walk us through a real-world use case end to end using Spark SQL, with mild complexity?

  • @ashishlimaye2408
    @ashishlimaye2408 3 years ago +1

    Great Video!

  • @marclamblin6211
    @marclamblin6211 2 years ago +1

    @Daniel FYI, in the description the link "Written form to keep for later" is broken.

    • @rockthejvm
      @rockthejvm  2 years ago +2

      Fixed - there was a / at the end dammit

  • @Capt_X
    @Capt_X 3 years ago

    Great video with a nice explanation! Is there any list that helps us identify which exchanges are performed by Spark and which by the codebase? Is it available in the Spark documentation?

    • @rockthejvm
      @rockthejvm  3 years ago +2

      Spark docs usually mention which transformations involve a shuffle. Major examples include joining, sorting and grouping.
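    For readers who want to check this themselves, here is a minimal sketch (assuming a local Spark session; the object and app names are illustrative) that triggers one of the shuffling transformations mentioned in the reply and prints its physical plan:

    ```scala
    import org.apache.spark.sql.SparkSession

    object ShuffleDemo extends App {
      // Assumed local setup for illustration
      val spark = SparkSession.builder()
        .appName("shuffle-demo")
        .master("local[*]")
        .getOrCreate()
      import spark.implicits._

      val df = (1 to 1000).toDF("id")

      // Grouping forces a shuffle: the printed physical plan
      // contains an Exchange node between the partial and final aggregations
      df.groupBy($"id" % 10).count().explain()
    }
    ```

    In the printed plan, look for a line starting with `Exchange hashpartitioning(...)` — that is the shuffle.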

  • @rishigc
    @rishigc 3 years ago

    Quick question - for Stage 1 and Stage 4, why is Spark using 6 partitions? How does Spark derive the number of partitions which it will use to create a dataset or read an existing dataset?

    • @rockthejvm
      @rockthejvm  3 years ago

      Unless previously determined, Spark will set the number of partitions to the number of cores on your machine.

    • @rishigc
      @rishigc 3 years ago

      @rockthejvm Hi Daniel, how can I find the number of cores my cluster has (through the Spark UI etc.) and how many cores are being used by my application? Also, could you please let me know if I can increase the number of cores from the %sql prompt?
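    To make the reply above concrete, here is a sketch (assuming a local Spark session; the object and app names are illustrative) of how to inspect the default parallelism and the partition counts it produces:

    ```scala
    import org.apache.spark.sql.SparkSession

    object PartitionDemo extends App {
      // Assumed local setup for illustration; local[*] uses all local cores
      val spark = SparkSession.builder()
        .appName("partition-demo")
        .master("local[*]")
        .getOrCreate()

      // Default parallelism: unless configured otherwise,
      // the number of cores available to the application
      println(spark.sparkContext.defaultParallelism)

      // A freshly parallelized dataset picks up that partition count
      val rdd = spark.sparkContext.parallelize(1 to 1000)
      println(rdd.getNumPartitions)

      // The post-shuffle partition count is a separate setting (default: 200)
      println(spark.conf.get("spark.sql.shuffle.partitions"))
    }
    ```

    On a real cluster, the Executors tab of the Spark UI also lists the total cores granted to the application.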

  • @naveenu8150
    @naveenu8150 3 years ago

    Hi, could you please make a video on the Spark UI?

    • @rockthejvm
      @rockthejvm  3 years ago +1

      Check out the video on the DAGs! If you're interested in learning about the Spark UI in detail, the Spark Essentials course on the site does exactly that:
      rockthejvm.com/p/spark-essentials