Master Reading Spark Query Plans

  • Added 26 Jul 2024
  • Spark Performance Tuning
    Dive deep into Apache Spark query plans to better understand how Spark operates under the hood. We'll cover how Spark creates logical and physical plans, as well as the role of the Catalyst Optimizer in applying optimization techniques such as filter (predicate) pushdown and projection pushdown.
    The video covers intermediate Apache Spark concepts in depth: detailed explanations of how to read the Spark UI and understand Spark's query plans through code snippets of various narrow and wide transformations, such as reading files, select, filter, join, group by, repartition, coalesce, hash partitioning, HashAggregate, round-robin partitioning, range partitioning, and sort-merge join. Understanding these will give you a grasp of Spark's step-by-step thought process and help you identify performance issues and possible optimizations.
    📄 Complete Code on GitHub: github.com/afaqueahmad7117/sp...
    🎥 Full Spark Performance Tuning Playlist: • Apache Spark Performan...
    🔗 LinkedIn: / afaque-ahmad-5a5847129
    Chapters:
    00:00 Introduction
    01:30 How does Spark generate logical and physical plans?
    04:46 Narrow transformations (filter, select, add or update columns) query plan explanation
    09:02 Repartition query plan explanation
    12:57 Coalesce query plan explanation
    17:32 Joins query plan explanation
    23:23 Group by count query plan explanation
    27:04 Group by sum query plan explanation
    28:05 Group by count distinct query plan explanation
    33:59 Interesting observations on Spark’s query plans
    36:56 When will predicate pushdown not work?
    39:07 Thank you
    #ApacheSpark #SparkPerformanceTuning #DataEngineering #SparkDAG #SparkOptimization
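The video's query plans repeatedly show `Exchange hashpartitioning(<key>, 200)` steps. As a conceptual sketch only (Spark actually uses Murmur3 hashing internally; Python's built-in `hash()` is a stand-in, and the row data below is made up), this is roughly how a hash partitioner decides which shuffle partition each row lands in:

```python
# Toy model of Exchange hashpartitioning. The 200 mirrors the default
# value of spark.sql.shuffle.partitions; hash() stands in for Murmur3.
NUM_SHUFFLE_PARTITIONS = 200

def target_partition(key, num_partitions=NUM_SHUFFLE_PARTITIONS):
    """Rows with equal keys always land in the same shuffle partition."""
    return hash(key) % num_partitions

# Hypothetical (customer_id, amount) rows
rows = [("cust_1", 10), ("cust_2", 25), ("cust_1", 5)]
placement = {key: target_partition(key) for key, _ in rows}
# Both cust_1 rows co-locate in one partition, which is what makes
# hash-based joins and HashAggregate steps possible after the shuffle.
```

The key property being illustrated: equal keys always hash to the same partition, so a downstream join or group-by can process each key entirely within one partition.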

Comments • 118

  • @afaqueahmad7117
    @afaqueahmad7117  11 months ago +10

    🔔🔔 Please remember to subscribe to the channel folks. It really motivates me to make more such videos :)

  • @ridewithsuraj-zz9cc
    @ridewithsuraj-zz9cc 6 days ago

    This is the most detailed explanation I have ever seen.

  • @roksig3823
    @roksig3823 8 months ago +1

    Thanks a bunch. To my knowledge, no one has explained Spark's explain function at this level of detail. Very in-depth information.

  • @neelbanerjee7875
    @neelbanerjee7875 a month ago +1

    Absolute gem ❤❤ Would love a video on handling real-time scenarios (slow-running jobs, OOM, etc.).

  • @1994salahuddin
    @1994salahuddin 11 months ago +3

    Proud of you brother, looking forward to more of such videos. Great job!

  • @satyajitmohanty5039
    @satyajitmohanty5039 7 days ago

    Explanation is so good

  • @SidharthanPV
    @SidharthanPV 11 months ago

    This is one of the best videos about Spark I have seen recently!

  • @YoSoyWerlix
    @YoSoyWerlix 5 months ago

    Afaque, THANK YOU SO MUCH FOR THESE VIDEOS!!
    They are so amazing for a fast paced learning experience.
    Hope you soon upload much more!!

  • @shubhamwaingade4144
    @shubhamwaingade4144 6 months ago

    One of the best videos I have seen on Spark, waiting for your Spark Architecture Video

  • @user-ue4ul1ru2n
    @user-ue4ul1ru2n 8 months ago

    Thanks for such an in-depth overview!! helps a lot to grow!!

  • @yashwantdhole7645
    @yashwantdhole7645 25 days ago

    You are a gem bro. The content that you bring here is terrific. ❤❤❤

  • @saptorshidana7903
    @saptorshidana7903 11 months ago +2

    Amazing content. I am a newbie to Spark but I am hooked. Sir, please post the continuation of the series; awaiting your next videos. Amazing teacher!

  • @iamexplorer6052
    @iamexplorer6052 8 months ago

    No one teaches complex things in such a detailed way like you do. Please keep spreading your knowledge to the world. I am sure there are people who learn from you and will remember you as a master for life, people settled in IT jobs like me.

  • @OmairaParveen-uy7qt
    @OmairaParveen-uy7qt 11 months ago +1

    Explained the concept really well!

  • @saravananvel2365
    @saravananvel2365 11 months ago

    Very useful, and it explains complex things in an easy manner. Thanks, and expecting more videos from you.

  • @adityasingh8553
    @adityasingh8553 11 months ago +1

    This takes me back to my YaarPadhade times. Great work, bhai, much love!

  • @psicktrick7667
    @psicktrick7667 8 months ago +1

    rare content! please don't stop making these

  • @anirbansom6682
    @anirbansom6682 8 months ago

    My 40 minutes today were well spent. Thanks for the knowledge sharing.

  • @GuruBala
    @GuruBala 8 months ago

    It's great to see such useful content on Spark... and it's helpful to understand it more clearly with your notes! You rock... Countless thanks!!

  • @vikasverma2580
    @vikasverma2580 11 months ago

    Bhai, my brother 😍 Now thousands of students will come to you, but don't forget your very first student 😜
    Very proud of you, bhai... And I can guarantee everyone here that he is the best teacher there is ❤️

  • @abhishekmohanty9971
    @abhishekmohanty9971 10 months ago

    Beautifully explained. Many concepts got cleared up. Thanks a lot, keep going.

  • @sandeepchoudhary3355
    @sandeepchoudhary3355 5 months ago

    Great content with practical knowledge. Hats off to you !!!

  • @dawidgrzeskow987
    @dawidgrzeskow987 3 months ago

    After searching for some time for material that truly explains this topic and digs deep enough, you clearly delivered. Thanks, Afaque.

  • @venkatyelava8043
    @venkatyelava8043 3 days ago

    One of the cleanest explanations I have ever come across on the internals of Spark. Really appreciate all the effort you are putting into making these videos.
    If you don't mind, may I know which text editor you are using when pasting the physical plan?

  • @maheshbongani
    @maheshbongani 10 months ago

    It's a great video with a great explanation. Awesome. Thank you for such a detailed explanation. Please keep doing such content.

  • @myl1566
    @myl1566 2 months ago

    One of the best videos I have come across on Spark query plan explanation. Thank you! :)

  • @piyushjain5852
    @piyushjain5852 9 months ago

    Very useful video, man. Thanks for explaining things in so much detail; keep doing the good work.

  • @sudeepbehera5921
    @sudeepbehera5921 5 months ago

    Thank you so much for making this video. This is really very helpful.

  • @jnana1985
    @jnana1985 11 months ago

    Great explanation!! Keep uploading such quality content, bro.

  • @sanjayplays5010
    @sanjayplays5010 7 months ago

    This is really good, thanks so much for this explanation!

  • @PavanKalyan-vw2cp
    @PavanKalyan-vw2cp 4 months ago

    Bro, you dropped this👑

  • @garydiaz8886
    @garydiaz8886 9 months ago

    This is pure gold, congrats bro, keep up the good work

    • @afaqueahmad7117
      @afaqueahmad7117  9 months ago

      Thank you @garydiaz8886, really appreciate it! :)

  • @tahiliani22
    @tahiliani22 7 months ago

    This is really informative, such details are not even present in the O'Reilly Learning Spark Book. Please continue to make such content. Needless to say but I have already subscribed.

  • @varunparuchuri9544
    @varunparuchuri9544 2 months ago +1

    Please do more videos, bro. Love this one.

    • @afaqueahmad7117
      @afaqueahmad7117  2 months ago

      Thank you @varunparuchuri9544, really appreciate it :)

  • @ManishKumar-qw3ft
    @ManishKumar-qw3ft 4 months ago +1

    Bhai, you make really great content. Love your videos. Please keep it up. You have great teaching skills.

  • @ujvadeeppatil8135
    @ujvadeeppatil8135 10 months ago

    By far the best content I have seen on the explain query topic!!! Keep it up, brother. Good luck!

  • @AmitBhadra
    @AmitBhadra 11 months ago

    Great content brother. Please post more 😁

  • @thecodingmind9319
    @thecodingmind9319 6 months ago

    Bro, I am a beginner but I was able to understand everything. Really great content, and your explanations were also amazing. Please continue making such great videos. Thanks a lot for sharing.

    • @afaqueahmad7117
      @afaqueahmad7117  6 months ago

      @thecodingmind9319 Thanks for the kind words, means a lot :)

  • @VenuuMaadhav
    @VenuuMaadhav a month ago

    Just the first 15 minutes of your YouTube video left me awed beyond words.
    What a great explanation, @afaqueahmad. Kudos to you!
    Please make more videos solving real-time scenarios using PySpark, and one on cluster configuration. Again, BIG THANKS!

    • @afaqueahmad7117
      @afaqueahmad7117  a month ago

      Hey @VenuuMaadhav, thank you for the kind words, means a lot. More coming soon :)

  • @remedyiq8034
    @remedyiq8034 5 months ago

    "God bless you! Great video! Learned a lot"

  • @sarfarazmemon2429
    @sarfarazmemon2429 3 months ago

    Underrated pro max!

  • @crazypri8
    @crazypri8 3 months ago

    Amazing content! Thank you for sharing!

  • @dishant_22
    @dishant_22 9 months ago

    Great explanation.

  • @RahulGhosh-yl7hl
    @RahulGhosh-yl7hl 6 months ago

    This was awesome!

  • @user-meowmeow1
    @user-meowmeow1 3 months ago

    this is gold. Thank you very much!

  • @Wonderscope1
    @Wonderscope1 7 months ago

    Great video, thanks for sharing. I'll definitely subscribe.

  • @shaheelsahoo8535
    @shaheelsahoo8535 2 months ago

    Great Content. Nice and Detailed!!

  • @prasadrajupericharla5545
    @prasadrajupericharla5545 2 months ago

    Excellent job 🙌

    • @afaqueahmad7117
      @afaqueahmad7117  a month ago

      Thanks @prasadrajupericharla5545, appreciate it :)

  • @MuhammadAhmad-do1sk
    @MuhammadAhmad-do1sk 2 months ago

    Excellent content, please make more videos like this with a deep understanding of "how stuff works"... Highly appreciate it. Love from 🇵🇰

    • @afaqueahmad7117
      @afaqueahmad7117  2 months ago

      Thank you @MuhammadAhmad-do1sk for the appreciation, love from India :)

  • @CoolGuy
    @CoolGuy 9 months ago

    I am sure that down the line, in a few years, you will cross 100k subscribers. Great content BTW.

    • @afaqueahmad7117
      @afaqueahmad7117  9 months ago +1

      Hey @CoolGuy , thanks man! Means a lot to me :)

  • @suman3316
    @suman3316 11 months ago

    Very good explanation... Keep going!

  • @jjayeshpawar
    @jjayeshpawar a month ago

    Great Video!

  • @nikhilc8611
    @nikhilc8611 8 months ago

    You are awesome man❤

  • @crystalllake3158
    @crystalllake3158 11 months ago

    Thank you for taking the time to create such an in-depth video on Spark plans. This is very helpful!
    Would you also be able to explain Spark memory tuning?
    How do we decide how many resources to allocate (driver memory, executor memory, number of executors, etc.) for a spark-submit?
    Also data structures tuning and garbage collection tuning!
    Thanks again!
    Thanks again !

    • @afaqueahmad7117
      @afaqueahmad7117  11 months ago +1

      Thanks for the kind words @crystalllake3158, and for the suggestion; currently the focus of the series is to cover all possible code-level optimizations. Resource-level optimizations will come much later; no plans for the upcoming few months :)

    • @crystalllake3158
      @crystalllake3158 11 months ago

      Thanks ! Please do keep uploading, love your videos !

  • @niladridey9666
    @niladridey9666 11 months ago

    quality content

  • @sahilmahale7657
    @sahilmahale7657 2 months ago

    Bro please make more videos !!!

  • @chidellasrinivas
    @chidellasrinivas 8 months ago

    I loved your explanation and understood it very well. Could you help me understand, at the 23-minute mark: if we have cid as the join key and group by region, how does the hash partitioning work? Will it consider both?

  • @kvin007
    @kvin007 8 months ago

    Great explanation! I love the simplicity of it! I wonder what app you use to show your Mac screen as a screenshot that you can edit with your iPad?

    • @afaqueahmad7117
      @afaqueahmad7117  8 months ago +1

      Thanks @kvin007! So, basically I join a Zoom meeting with myself and annotate, haha!

  • @Shrawani18
    @Shrawani18 11 months ago

    You were too good!

  • @venkateshkannan7398
    @venkateshkannan7398 2 months ago

    Great explanation man! Thank you! What's the editor that you use in the video to read query plans?

    • @afaqueahmad7117
      @afaqueahmad7117  a month ago

      Thanks @venkateshkannan7398, appreciate it. Using Notion :)

  • @mohitupadhayay1439
    @mohitupadhayay1439 a month ago

    Hi Afaque.
    Do we have any library, or can we create a UDF, to understand why some records got corrupted while reading a file?
    I have a nested XML file with a large number of columns and I want to understand why some columns end up corrupt. Couldn't find anything helpful online.
    A video on this would be greatly appreciated.

  • @tahiliani22
    @tahiliani22 3 months ago

    At the very end of the video (38:36), we see that the cast("int") filter is present in the parsed logical plan and the analyzed logical plan. I am a little confused as to when we refer to those plans. Can you please explain?

  • @user-dv1ry5cs7e
    @user-dv1ry5cs7e 3 months ago

    I am doing coalesce(1) and getting an error: "Unable to acquire 65536 bytes of memory, got 0".
    But when I do repartition(1), it works. Can you please explain what happens internally in this case?

  • @rajubyakod8462
    @rajubyakod8462 5 months ago

    If it does local aggregation before shuffling the data, then why does it throw an out-of-memory error while taking the count of each key when the column has a huge number of distinct values?

  • @mohitupadhayay1439
    @mohitupadhayay1439 a month ago

    Just 10 minutes into this notebook and I am awed beyond my words.
    What a great explanation Afaque. Kudos to you!
    Please make more videos of solving real time scenarios using Spark UI and one on Cluster configuration too. Again BIG THANKS!

    • @afaqueahmad7117
      @afaqueahmad7117  a month ago

      Hi @mohitupadhayay1439, really appreciate the kind words, it means a lot. A lot coming soon :)

  • @udaymmmmmmmmmm
    @udaymmmmmmmmmm 7 months ago

    Can you please prepare a video showing the storage anatomy of data during the job execution cycle? I am sure there are many aspiring Spark students who may be confused about the idea of an RDD or DataFrame and how it accesses data through APIs (since Spark does in-memory computation) during job execution. It will help many upcoming Spark developers.

    • @afaqueahmad7117
      @afaqueahmad7117  4 months ago

      Hey @udaymmmmmmmmmm, I recently added a video on Spark Memory Management. It talks about storage and the responsibilities of each of the memory components during job execution. You may want to have a look at it :)
      Link here: czcams.com/video/sXL1qgrPysg/video.html

  • @mission_possible
    @mission_possible 10 months ago

    Thanks for the content! When can we expect a new video?

  • @TechnoSparkBigData
    @TechnoSparkBigData 11 months ago

    In Exchange hashpartitioning, what is the significance of the number 200? What does it mean?

    • @afaqueahmad7117
      @afaqueahmad7117  11 months ago +1

      200 is the default number of shuffle partitions. You can find it in the table under the property name "spark.sql.shuffle.partitions": spark.apache.org/docs/latest/sql-performance-tuning.html#other-configuration-options
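As a hedged illustration of the reply above (the app name "my-app" and the value 64 are placeholders, not recommendations from the video), `spark.sql.shuffle.partitions` can be overridden when building the session, which changes the partition count you see in `Exchange hashpartitioning(..., N)`:

```python
# Sketch of overriding the default 200 shuffle partitions.
# Requires a PySpark installation; values here are illustrative only.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("my-app")                                # placeholder name
    .config("spark.sql.shuffle.partitions", "64")     # default is 200
    .getOrCreate()
)
```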

  • @sangu2227
    @sangu2227 4 months ago

    I have a doubt: when is the data distributed to the executors? Is it before or after the tasks are scheduled, and who assigns the data to the executors?

    • @afaqueahmad7117
      @afaqueahmad7117  4 months ago

      Hey @sangu2227, this requires an understanding of transformations/actions and lazy evaluation in Spark. Spark doesn't do anything (either scheduling a task or distributing data) until an action is called.
      The moment an action is invoked, Spark creates a logical -> physical plan and Spark's scheduler divides the work into tasks. Spark's driver and cluster manager then distribute the data to the executors for processing :)
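The lazy-evaluation behaviour described in that reply can be sketched in plain Python (this is a toy model, not the Spark API): transformations only record a plan, and nothing runs until an action is called.

```python
# Toy model of Spark's lazy evaluation: filter/map are "transformations"
# that only record work; collect() is the "action" that executes the plan.
class LazyDataset:
    def __init__(self, data, ops=None):
        self.data = data
        self.ops = ops or []  # the recorded plan; nothing executed yet

    def filter(self, pred):   # transformation: extend the plan, no work done
        return LazyDataset(self.data, self.ops + [("filter", pred)])

    def map(self, fn):        # transformation: extend the plan, no work done
        return LazyDataset(self.data, self.ops + [("map", fn)])

    def collect(self):        # action: the plan finally runs, start to finish
        rows = self.data
        for kind, f in self.ops:
            rows = [f(r) for r in rows] if kind == "map" else [r for r in rows if f(r)]
        return rows

ds = LazyDataset([1, 2, 3, 4]).filter(lambda x: x % 2 == 0).map(lambda x: x * 10)
# ds.ops holds two recorded steps; no data has been touched yet.
result = ds.collect()  # -> [20, 40]
```

In real Spark, invoking the action is also the point where the plan is optimized and tasks are scheduled onto executors, which is why no data moves before then.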

  • @nijanthanvijayakumar
    @nijanthanvijayakumar 11 months ago

    Hello @afaqueahmad7117, thanks for the great video. While explaining repartition, you mentioned you have a video on AQE. Could you please link that as well?

    • @afaqueahmad7117
      @afaqueahmad7117  11 months ago +1

      Thanks @nijanthanvijayakumar, yes that video is upcoming in the next few days :)

    • @nijanthanvijayakumar
      @nijanthanvijayakumar 11 months ago

      Can't wait for that @@afaqueahmad7117
      These YouTube videos are so much more helpful. Hands down some of the best that explain Spark performance tuning and internals in the simplest form possible. Cheers!

  • @Pratik0917
    @Pratik0917 7 months ago

    Fab content

  • @TechnoSparkBigData
    @TechnoSparkBigData 11 months ago

    Hi sir, you mentioned that you referred to AQE before. Can I get that link? I want to learn about AQE.

  • @TechnoSparkBigData
    @TechnoSparkBigData 11 months ago

    You mentioned that for coalesce(2) a shuffle will happen, but later you mentioned that a shuffle will not happen in the case of coalesce, hence no partitioning scheme. Could you please explain this in detail?

    • @afaqueahmad7117
      @afaqueahmad7117  11 months ago +1

      So, coalesce will only incur a shuffle in a very aggressive situation. If the objective can be achieved by merging (reducing) the partitions on the same executor, it will go ahead with that. Coalesce(2) is an aggressive reduction in the number of partitions, meaning that Spark has no option but to move partitions. As there were 3 executors (in the example I referenced in the video), even if it reduced the partitions on each executor to a single partition, it would still end up with 3 partitions in total; therefore it incurs a shuffle to reach 2 final partitions :)
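That reply can be made concrete with a toy model (the executor names and row data below are hypothetical, mirroring the 3-executor example from the video): merging partitions locally on each executor never gets below one partition per executor, so a target of 2 forces cross-executor movement.

```python
# Toy model of coalesce: each executor merges its own partitions locally,
# with no data moving between executors.
executors = {
    "exec1": [["a", "b"], ["c"]],
    "exec2": [["d"], ["e", "f"]],
    "exec3": [["g"]],
}

def coalesce_local(executors):
    """Merge every executor's partitions into one; no cross-executor shuffle."""
    return {ex: [sum(parts, [])] for ex, parts in executors.items()}

merged = coalesce_local(executors)
total_partitions = sum(len(parts) for parts in merged.values())
# total_partitions == 3: local merging bottoms out at one partition per
# executor, so reaching coalesce(2) requires moving data between
# executors, i.e. a shuffle.
```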

    • @TechnoSparkBigData
      @TechnoSparkBigData 11 months ago

      @@afaqueahmad7117 Thanks for the clarification.

  • @ZafarDorna
    @ZafarDorna 5 months ago

    Hi Afaque, how can I download the data files you are using? I want to try it hands on :)

    • @afaqueahmad7117
      @afaqueahmad7117  5 months ago

      Should be available here: github.com/afaqueahmad7117/spark-experiments :)

  • @TJ-hs1qm
    @TJ-hs1qm 23 days ago

    What drawing board are you using for those notes?

    • @afaqueahmad7117
      @afaqueahmad7117  19 days ago +1

      Using "Notion" for text, "Nebo" on iPad for the diagrams

    • @TJ-hs1qm
      @TJ-hs1qm 19 days ago

      @@afaqueahmad7117 Cool, thx!

  • @bhargaviakkineni
    @bhargaviakkineni 2 months ago

    Hi sir, I came across a doubt.
    Consider an executor size of 1 GB per executor. We have 3 executors, and initially 3 GB of data is distributed across them, so each executor holds roughly a 1 GB partition. After various transformations we need to decrease the number of partitions to 1, using repartition(1) or coalesce(1). In this scenario all 3 partitions merge into 1 partition of roughly 3 GB in total, which has to sit in a single executor with only 1 GB of capacity. The data exceeds the executor size; what happens in this scenario? Could you please make a video on this, sir?

    • @afaqueahmad7117
      @afaqueahmad7117  2 months ago

      Hi @bhargaviakkineni, in the scenario you described, where the resulting partition size (3 GB) exceeds the memory available on a single executor (1 GB), Spark will attempt to spill data to disk. Spilling to disk keeps the application from crashing with out-of-memory errors; however, there is a performance cost, because disk I/O is slower.
      On a side note, as a best practice, it's also worth re-evaluating the need to write to a single partition. Avoid writing to a single partition, because it generally creates a bottleneck when sizes are large. Try to balance the partitions against the resources of the cluster (executors/cores).
      Hope that clarifies :)
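The arithmetic behind that answer is simple enough to sketch (all sizes below are the hypothetical ones from the question, not measured values): compare the merged partition size against a single executor's memory to see whether a spill is expected.

```python
# Back-of-envelope check for the coalesce(1)/repartition(1) scenario above.
# Sizes are hypothetical, taken from the question (3 executors x ~1 GB each).
executor_memory_gb = 1.0
partition_sizes_gb = [1.0, 1.0, 1.0]   # one ~1 GB partition per executor

merged_size_gb = sum(partition_sizes_gb)
will_spill = merged_size_gb > executor_memory_gb
# will_spill is True: the ~3 GB single partition exceeds the 1 GB executor,
# so Spark spills to disk (slower I/O) instead of failing outright.
```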

  • @NiranjanAnandam
    @NiranjanAnandam a month ago

    The local distinct on cust_id doesn't make sense to me and I couldn't understand it. How does it do a distinct count globally if the count is already computed locally? Also, the reasoning behind why cast prevents predicate pushdown is not clearly explained; it's just stated as mentioned in the docs.

  • @Precocious_Pervez
    @Precocious_Pervez 11 months ago

    Great work, buddy, keep it up... Love your content, very simple to understand @Afaque Ahmed