Apache Spark - 04 - Architecture - Part 2

Share
Embed
  • Published 7 Sep 2024
  • Spark Programming and Azure Databricks ILT Master Class by Prashant Kumar Pandey - Fill out the google form for Course inquiry.
    forms.gle/Nxk8...
    -------------------------------------------------------------------
    Data Engineering is one of the highest-paid jobs of today.
    It is going to remain among the top IT skills for a long time.
    Are you in database development, data warehousing, ETL tools, data analysis, SQL, or PL/SQL development?
    I have a well-crafted success path for you.
    I will help you get prepared for the data engineer and solution architect roles, depending on your profile and experience.
    We created a course that takes you deep into core data engineering technology and helps you master it.
    If you are a working professional aspiring to:
    1. Become a data engineer.
    2. Change your career to data engineering.
    3. Grow your data engineering career.
    4. Get the Databricks Spark certification.
    5. Crack Spark data engineering interviews.
    ScholarNest is offering a one-stop integrated learning path.
    The course is open for registration.
    The course delivers an example-driven approach and project-based learning.
    You will practice the skills using MCQs, coding exercises, and capstone projects.
    The course comes with the following integrated services.
    1. Technical support and Doubt Clarification
    2. Live Project Discussion
    3. Resume Building
    4. Interview Preparation
    5. Mock Interviews
    Course Duration: 6 Months
    Course Prerequisite: Programming and SQL Knowledge
    Target Audience: Working Professionals
    Batch start: Registration Started
    Fill out the below form for more details and course inquiries.
    forms.gle/Nxk8...
    --------------------------------------------------------------------------
    Learn more at www.scholarnes...
    The best place to learn data engineering, Big Data, Apache Spark, Databricks, Apache Kafka, Confluent Cloud, AWS Cloud Computing, Azure Cloud, and Google Cloud: self-paced, instructor-led, and certification courses, plus practice tests.
    ========================================================
    SPARK COURSES
    -----------------------------
    www.scholarnes...
    www.scholarnes...
    www.scholarnes...
    www.scholarnes...
    www.scholarnes...
    KAFKA COURSES
    --------------------------------
    www.scholarnes...
    www.scholarnes...
    www.scholarnes...
    AWS CLOUD
    ------------------------
    www.scholarnes...
    www.scholarnes...
    PYTHON
    ------------------
    www.scholarnes...
    ========================================
    We are also available on the Udemy platform.
    Check out the link below for our courses on Udemy.
    www.learningjo...
    =======================================
    You can also find us on Oreilly Learning
    www.oreilly.co...
    www.oreilly.co...
    www.oreilly.co...
    www.oreilly.co...
    www.oreilly.co...
    www.oreilly.co...
    www.oreilly.co...
    www.oreilly.co...
    =========================================
    Follow us on Social Media
    / scholarnest
    / scholarnesttechnologies
    / scholarnest
    / scholarnest
    github.com/Sch...
    github.com/lea...
    ========================================

Comments • 112

  • @ScholarNest
    @ScholarNest  3 years ago

    Want to learn more Big Data technology courses? You can get lifetime access to our courses on the Udemy platform. Visit the link below for discounts and coupon codes.
    www.learningjournal.guru/courses/

    • @mjeewani123
      @mjeewani123 2 years ago

      I really like your content, very easy to understand. THANK YOU. Have you covered anywhere how RDDs help with fault tolerance and recovery?

  • @iitgupta2010
    @iitgupta2010 5 years ago +18

    This is the best video... what an explanation, sir. Mind blowing. I feel bad that bad teachers get so much attention while people like you don't get much.
    You are brilliant.

  • @philippederome2434
    @philippederome2434 5 years ago +11

    I love the curtains opening up special effect!

  • @sethiabhithemaverick
    @sethiabhithemaverick 6 years ago +4

    This is the clearest and most thorough explanation of the complete Spark architecture one can ever get.

  • @davidpeng8431
    @davidpeng8431 a year ago

    Your video is one of the best for Spark; it doesn't spend too much time on theory at a high level, but is down to earth and very practical.

  • @manasamathi7157
    @manasamathi7157 2 years ago

    Even beginners who have zero knowledge about Spark can understand the flow. Great explanation 😊

  • @SachinChavan13
    @SachinChavan13 4 years ago +1

    Wow ! Very crisp and to-the-point explanation. Really helpful. Thank you Prashant!

  • @sthirumalai
    @sthirumalai 4 years ago

    I started learning Spark by enrolling in a self-learning course on Udemy, but by far this is the best video I have ever watched; it explained the core concepts of Spark clearly and precisely. I appreciate your efforts.

  • @philippederome2434
    @philippederome2434 5 years ago +2

    I like the animal logos for the 3 APIs: turtle for RDD (slowest), cat for SQL-Dataset (medium), rabbit for DataFrame (fastest). But see Brian M. Clapper's recent video on the Frameless API (fast, compile-safe, and more functional, i.e. it can compose actions).

  • @muhammadarslanbhatti2139

    Hands down the best explanation you'll find on YouTube.

  • @rajatsharma6137
    @rajatsharma6137 3 years ago

    Extremely lucid and to the point... congrats!

  • @franciscovinueza5320
    @franciscovinueza5320 4 years ago

    Images, colors, examples, and clear explanations! This video has everything! Keep up the good work! Thank you, sir.

  • @shubhampatil2391
    @shubhampatil2391 a year ago

    Thank you for the great content! Just one request though: please add a highlighter to your pointer. It is kind of hard to track its movement, and I often have to rewind to check what you actually clicked on.

  • @maheshpoddar4065
    @maheshpoddar4065 4 years ago

    Exceptional way of explaining and making concepts crystal clear. I am enjoying it the way I used to enjoy your earlier videos on Hadoop.

  • @gefeizhu3953
    @gefeizhu3953 5 years ago +1

    Fantastic video, I have subscribed to your channel!

  • @anilpatil3056
    @anilpatil3056 6 years ago +1

    Highly recommended for every Spark newbie. BTW, thanks a lot.

  • @erpoojadak
    @erpoojadak 6 years ago +1

    The best tutorial I have ever seen... simply awesome.

  • @abhishekt450
    @abhishekt450 4 years ago

    Just brilliant 👌.. point to point..

  • @Nasirmah
    @Nasirmah 5 years ago

    It should be a => (a(1), 1) to get the second field if you want the result shown at 9:30. The first field of the array is an empty string, but you can still reduce by key, since it will all be the empty string at the end; that would not hold if the files didn't all contain the /etc root path. I find it useful to run collect() at each step, like kvRDD.collect(), to see. Thank you very much for the best Spark tutorial; I let the ads run to help out.
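    The fix this comment describes can be sketched in plain Python, standing in for the Spark Scala shown in the video (the input paths below are made up for illustration): splitting an absolute path on "/" puts an empty string at index 0, so index 1 holds the first real component, and summing per key mimics reduceByKey((x, y) => x + y).

    ```python
    from collections import defaultdict

    # Hypothetical input lines, standing in for the file paths in the video.
    lines = ["/etc/hosts", "/etc/passwd", "/usr/bin/python"]

    # Splitting "/etc/hosts" on "/" yields ["", "etc", "hosts"]:
    # index 0 is an empty string, so index 1 is the first path component.
    pairs = [(line.split("/")[1], 1) for line in lines]

    # A plain-Python stand-in for reduceByKey((x, y) => x + y):
    # sum the values for each key.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value

    print(dict(counts))  # {'etc': 2, 'usr': 1}
    ```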

  • @sandeepkumarvadde
    @sandeepkumarvadde 5 years ago

    This is an extraordinary explanation of the Spark architecture.
    Sir, please pick a few examples to implement in cluster mode too.

  • @nishantgupta8562
    @nishantgupta8562 6 years ago

    Best video by far.. What a teacher you are.

  • @pratiksingh9480
    @pratiksingh9480 3 years ago

    Hi Prashant Sir,
    First things first: I am planning to take up the course. Your explanations, visuals, etc. are awesome; kudos for that. The only thing that concerns me is that I have a lot of questions when I study anything, some silly as well.
    Is there any channel (Slack/Discord etc. for enrolled students) where doubts are cleared? Some AMA-kind sessions, etc., because going through the material with doubts left uncleared leaves a learner in almost the same state. I will share the same message with you over LinkedIn as well; I'm not sure how frequently you look into YouTube comments.

  • @sanjaysoni5017
    @sanjaysoni5017 4 years ago

    Awesome video with a correct explanation.

  • @shivam.shakya
    @shivam.shakya 2 years ago

    Great video

  • @bhavaniv1721
    @bhavaniv1721 5 years ago

    Please post more videos. I follow all your videos; they are something different from the others... explained in an easily understandable way.

    • @pratiksingh9480
      @pratiksingh9480 3 years ago

      You mean Data Savvy :P Yeah, Prashant is really good at explanation.

  • @malapatiprasanna
    @malapatiprasanna 5 years ago

    Thanks a lot, sir, for your outstanding efforts in making us brilliant. Could you please add some more Spark videos: shared variables, detailed transformations, and actions? Really, I am doubly satisfied with your explanations; going forward, we want to see more from you on Spark.

  • @sachinpatil4218
    @sachinpatil4218 4 years ago

    Outstanding..

  • @raunakgpt
    @raunakgpt 4 years ago

    Very good video, thanks, sir. But I didn't find anything titled Apache Spark - 05 in the playlist. Do we have some more videos on architecture?

  • @AzharHussain2u
    @AzharHussain2u 3 years ago

    just awesome

  • @vijaykumar-wq9db
    @vijaykumar-wq9db 4 years ago

    Thank you sir...super video

  • @deepanshunagpal6440
    @deepanshunagpal6440 3 years ago

    nicely explained.

  • @kannanarumugam9257
    @kannanarumugam9257 6 years ago

    Thank you very much! Nicely explained Spark architecture; there is no better way than this. Keep up the good work!

  • @pramodswain6043
    @pramodswain6043 6 years ago

    It is really appreciated. I have never seen an explanation like this, so thanks a lot, sir, for sharing such extraordinary skill.

  • @premrajkumar6910
    @premrajkumar6910 6 years ago

    Nice video with a very clear explanation. But we will have to wait very long for a new session. Please try to upload faster; otherwise it will take a year to learn Spark.

  • @khelifakemouche4070
    @khelifakemouche4070 5 years ago

    Great tutorial and Excellent teaching

  • @akashhudge5735
    @akashhudge5735 4 years ago

    Nice explanation.

  • @damodharable
    @damodharable 6 years ago +2

    Excellent teaching skills, thanks a lot :)

  • @KoushikPaulliveandletlive

    Just too good. You need a lot of knowledge to be able to explain complex things so easily.

  • @avijitmukherjee678
    @avijitmukherjee678 4 years ago

    Thanks so much, Sir

  • @Vihaan_Nigam16
    @Vihaan_Nigam16 5 years ago

    Excellent way of teaching ...Thank you

  • @elvinanoronha6032
    @elvinanoronha6032 5 years ago

    Awesome explanation !!!!!

  • @tejaswianagani8756
    @tejaswianagani8756 5 years ago

    Very, very good explanation, sir. I am very thankful to you.

  • @helloworld4u
    @helloworld4u 3 years ago

    Thank you

  • @TheVikash620
    @TheVikash620 6 years ago

    Great explanation sir. Waiting for new concepts to be covered in future videos.

  • @sachinhaldankars
    @sachinhaldankars 5 years ago

    Simply Awesome explanation...

  • @repsycled1605
    @repsycled1605 6 years ago

    One of the best video series for learning. Do you also provide classroom trainings?

  • @sanjaykumarmahapatra
    @sanjaykumarmahapatra 6 years ago

    Nice way of explaining. Thank you so much for your effort in making such nice tutorials. I am becoming a fan of you, man! Keep it up (Y)

  • @vishalteli7343
    @vishalteli7343 5 years ago

    Simply Best!

  • @RahulEdvin
    @RahulEdvin 5 years ago

    excellent explanation !

  • @asksmruti
    @asksmruti 6 years ago

    Your tutorials are simply awesome.. :-) Super Like

  • @ramkumarananthapalli7151

    Hi
    Thanks a lot for these videos; they are quite helpful. In this video you mentioned that an RDD is immutable, but you have overwritten the same RDD variable by changing the number of partitions, and we can also load a different text file into the same named variable (RDD). Could you explain how it is immutable in this case?
    Thanks in advance for your help.

  • @pc0riginal870
    @pc0riginal870 5 years ago

    Thank you so much from the bottom of my heart. May God make you happy.

  • @csharma8732
    @csharma8732 5 years ago

    Very nice video sir. Thank you.

  • @damodargoud6263
    @damodargoud6263 5 years ago

    thanks for sharing your knowledge.

  • @althafmohammed5285
    @althafmohammed5285 6 years ago

    It's really amazing; it's really at a real-world level.

  • @user-ol6xv2gq3k
    @user-ol6xv2gq3k 5 years ago

    Very, very amazing. Thank you.

  • @paritoshahuja5058
    @paritoshahuja5058 5 years ago

    Really amazing explanation, thank you.

  • @nationviews6760
    @nationviews6760 6 years ago

    Thank you so much, Sir, for providing such a nice practical explanation.

  • @sagarsinghrajpoot3832
    @sagarsinghrajpoot3832 5 years ago

    Great video 🤓🤓sir

  • @sunilgaikwad3254
    @sunilgaikwad3254 5 years ago

    Hello Sir, loved this tutorial. Thanks a lot.
    I have one doubt; consider the following scenario:
    Input data size (reading from HDFS): 20 GB
    No. of executors: 2
    Executor memory: 8 GB
    RDD partition factor: 2
    And we run a Spark job in client mode.
    So in this case:
    1. How will the total 20 GB of data get processed by the Spark job?
    2. How many stages and tasks will get created?
    3. How will the total 20 GB of data be partitioned?

    • @ScholarNest
      @ScholarNest  5 years ago

      1. Do you need 20 GB of memory to process 20 GB of data? No. More memory can improve performance, but you can still process it with less memory.
      2. Stages depend on your logic, and the number of tasks depends on your partitions.
      3. You asked for two partitions, so it will shuffle and make it two in that stage. The next stage depends on other factors.

  • @AIMLBites
    @AIMLBites 5 years ago

    Thanks for the wonderful explanation in this video. Can you please tell me whether this is the general scenario for each job in Spark: do map and reduceByKey operations always run in 2 different stages, or are there cases where they can run in a single stage as well? Any examples or leads would be appreciated!

    • @ScholarNest
      @ScholarNest  5 years ago

      Think about it. Map and Reduce? You are already talking about two stages.
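      The rule behind this reply can be sketched as a toy model (not the real Spark scheduler): narrow transformations such as map and filter stay in one stage, while each wide transformation that needs a shuffle, such as reduceByKey, closes the current stage and opens a new one. The WIDE set below lists a few common shuffle-inducing operations, chosen for illustration.

      ```python
      # Toy stage planner: narrow ops share a stage; each wide (shuffle)
      # op closes the current stage and opens a new one.  Not Spark API.
      WIDE = {"reduceByKey", "groupByKey", "sortByKey", "repartition", "join"}

      def count_stages(transformations):
          stages = 1
          for op in transformations:
              if op in WIDE:
                  stages += 1
          return stages

      print(count_stages(["map", "reduceByKey"]))    # 2: the shuffle splits the job
      print(count_stages(["map", "filter", "map"]))  # 1: all narrow, one stage
      ```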

  • @a143r
    @a143r 5 years ago

    Excellent, sir...!

  • @sbylk99
    @sbylk99 5 years ago

    Best tutorial, thank you!

  • @4ukcs2004
    @4ukcs2004 6 years ago

    Sincerely looking for a Spark Streaming with Kafka tutorial, sir... when are you publishing it? You are the best.

  • @annaynomouse2821
    @annaynomouse2821 3 years ago

    How do you create the animation shown from 4:50 to 4:55? Which software? I like how you bring clarity visually.

  • @143badri
    @143badri 5 years ago

    What is the default number of partitions if we do not define it?

  • @NareshJadapalli236
    @NareshJadapalli236 5 years ago

    I am confused about one step.
    We say that an RDD distributes data across nodes.
    When we create 5 partitions from an RDD, does that mean the RDD has loaded all the data and then partitions it?
    Will it load the data from different nodes to the driver node, keep it in memory, and distribute it across?
    If yes, it is not following the data locality paradigm, and data movement is very costly. (I am sure Spark follows data locality.)
    What am I missing?

  • @kidslearningscience
    @kidslearningscience 5 years ago

    A supplementary video with Amazon EMR, please.

  • @AwaraGhumakkad
    @AwaraGhumakkad 4 years ago

    Sir, I executed the textFile() command with 5 partitions in cluster mode (5 workers), but every time I could see that the job was being executed on only 1 of the workers.
    I mean, in every run, only 1 worker was executing all the partitions.
    Is there any extra configuration required here?
    I am using spark-shell mode.

    • @AwaraGhumakkad
      @AwaraGhumakkad 4 years ago

      Please ignore this, I got my answer.
      Thanks anyway.

  • @NikhilKekan
    @NikhilKekan 5 years ago

    Hello, great tutorial.
    Can you please elaborate more on the reduceByKey((x, y) => x + y) that you used to count the number of pairs with the same key?
    I am a bit confused about how x + y will give us the total count.

    • @xmankamal
      @xmankamal 5 years ago +2

      Here reduceByKey aggregates the values of similar keys into one value.
      Suppose you have a (key, value) list: List((hello, 1), (world, 1), (hello, 1), (hello, 2)).
      reduceByKey performs the operation per key, and x and y denote only the values from the (key, value) pairs (you cannot operate on the key here).
      For the key hello:
      rdd.reduceByKey((x, y) => x + y) is equivalent to (1, 1 => 1 + 1), giving List((hello, 2), (world, 1), (hello, 2)).
      There are still pairs belonging to the hello key, so the operation is performed again:
      rdd.reduceByKey((x, y) => x + y) is equivalent to (2, 2 => 2 + 2), giving List((hello, 4), (world, 1)).
      Now the list has only one hello pair, so no further reduction is possible.
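      The worked example above can be checked in plain Python, using functools.reduce per key as a stand-in for Spark's reduceByKey (the pair list is taken from the comment):

      ```python
      from functools import reduce
      from itertools import groupby

      # The (key, value) list from the comment above.
      pairs = [("hello", 1), ("world", 1), ("hello", 1), ("hello", 2)]

      # reduceByKey((x, y) => x + y), modeled in plain Python: group the
      # values by key, then fold each group with the two-argument function.
      # x and y are always values, never keys.
      grouped = groupby(sorted(pairs), key=lambda kv: kv[0])
      result = {k: reduce(lambda x, y: x + y, (v for _, v in g)) for k, g in grouped}

      print(result)  # {'hello': 4, 'world': 1}
      ```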

  • @surajpillai2117
    @surajpillai2117 5 years ago

    Hello... I had a question. For the intermediate RDDs which are generated, would the partitioned data under them also be distributed to the executors? Or would the redistribution only happen on an action? Please help! :)

    • @ScholarNest
      @ScholarNest  5 years ago

      Everything is lazy so nothing happens until an action is executed.

  • @amirboutaghou274
    @amirboutaghou274 5 years ago

    Hello. First of all, I want to thank you for this superb tutorial. I have one question: following your example, imagine we have 10 partitions and 2 executors, and let's suppose we have no transformation that causes a shuffle. How many tasks run in parallel? Is it 5?
    Thanks in advance for your answer.

    • @ScholarNest
      @ScholarNest  5 years ago +1

      Why do you think it's going to be 5?
      Because 10 partitions / 2 executors?
      The number of executors has nothing to do with how many tasks are created. Once the tasks are created, they will run on only two executors, because you have only two executors.
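      The reply's point, one task per partition with executors only limiting how many run at once, can be sketched as a toy scheduler (assuming one core per executor; none of this is Spark API):

      ```python
      # Toy scheduler: one task per partition, executed in waves of at
      # most `executors` tasks at a time (assuming one core each).
      def schedule(partitions, executors):
          tasks = list(range(partitions))  # one task per partition
          waves = [tasks[i:i + executors] for i in range(0, len(tasks), executors)]
          return len(tasks), len(waves)

      n_tasks, n_waves = schedule(partitions=10, executors=2)
      print(n_tasks, "tasks, run in", n_waves, "waves")  # 10 tasks, run in 5 waves
      ```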

    • @amirboutaghou274
      @amirboutaghou274 5 years ago

      @@ScholarNest First of all, thanks for your quick reply. So I understand that in my example, the number of tasks created per stage depends only on the number of partitions; the number of executors has nothing to do with it. Am I correct? So in my example, I will still have 10 tasks because I have 10 partitions?

  • @Modern_revolution
    @Modern_revolution 5 years ago

    Super happy

  • @rakeshsahoo16
    @rakeshsahoo16 4 years ago

    Why did proc and opt end up in one partition?

  • @tajirapb
    @tajirapb 5 years ago

    With Spark 2.3.2, the number of elements within each partition is not displayed by the code that you have shown.

  • @abhishekbarnwal5867
    @abhishekbarnwal5867 5 years ago

    I am using Spark 2.2.0, but the code shown in the video doesn't print any output in the shell.
    val myrdd = sc.textFile("UserData.txt", 4)
    myrdd.foreachPartition(x => println("No. of elements in partition: " + x.count(y => true)))
    Please share workable code.
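    A likely cause, offered as an assumption rather than a confirmed diagnosis: println inside foreachPartition runs on the executors, so on a real cluster the output lands in the executor stdout logs, not the driver console (in a local-mode shell it usually does appear). A driver-side pattern is to return the per-partition counts and collect them, e.g. something like rdd.glom().map(_.length).collect() in the Scala shell. The same idea, modeled in plain Python with a list of lists standing in for a partitioned RDD:

    ```python
    # A partitioned RDD modeled as a list of lists; contents are made up.
    partitions = [["a", "b"], ["c"], ["d", "e", "f"], []]

    # Per-partition work (what each executor would do): compute the size.
    sizes = [len(p) for p in partitions]

    # "Collected" back on the driver, where the output is visible.
    print("Elements per partition:", sizes)  # Elements per partition: [2, 1, 3, 0]
    ```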

  • @premrajkumar6910
    @premrajkumar6910 6 years ago

    Also, if possible, please explain the code using the Java APIs too. I am developing with the Java API, but some methods are not supported even though they are mentioned in the API documentation, and they throw runtime errors. When we develop using the Java or Python API, does it get converted to Scala internally?

  • @robind999
    @robind999 5 years ago

    Very good one. Any Airflow demo?

    • @ScholarNest
      @ScholarNest  5 years ago

      That's still incubating... I do not use open-source projects until they graduate and become production-ready.

    • @robind999
      @robind999 5 years ago

      @@ScholarNest Thank you so much for this info.

  • @nidhidewan5173
    @nidhidewan5173 6 years ago

    waiting for more videos :)

    • @ScholarNest
      @ScholarNest  6 years ago

      +Nidhi Dewan, coming soon... I am slightly busy coding a website for Learning Journal. Just another week away from the release.

    • @tushibhaque863
      @tushibhaque863 6 years ago

      I thank you from the bottom of my heart for your hard work...

  • @nikhil199029
    @nikhil199029 5 years ago

    Is reduceByKey a Spark/Scala-specific function?

  • @gopinathGopiRebel
    @gopinathGopiRebel 5 years ago

    Sir, I have a doubt: how does the processing of partitions depend on the number of executor cores?

  • @DavidZYW
    @DavidZYW 6 years ago

    Thanks. I have a question: is the shuffle and sort executed on multiple nodes?

    • @ScholarNest
      @ScholarNest  6 years ago

      Yes, every node that owns a partition must participate in shuffle & sort.
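      The routing rule behind that answer can be sketched in plain Python: during a shuffle, each node that produced map output picks the target reduce partition by hashing the key, so records with the same key meet on one node. The modulo-of-hash rule below is the common default approach (e.g. Spark's HashPartitioner works this way), simplified here for illustration.

      ```python
      def target_partition(key, num_partitions):
          # A non-negative modulo of the key's hash picks the reduce partition.
          return hash(key) % num_partitions

      records = [("hello", 1), ("world", 1), ("hello", 2)]

      # Every record with the same key routes to the same partition,
      # no matter which node produced it.
      targets = [target_partition(key, 4) for key, _ in records]
      assert targets[0] == targets[2]  # both "hello" records meet on one node
      print(targets)
      ```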

  • @KnowWorldsFact
    @KnowWorldsFact 5 years ago

    Thanks, sir. Can you please give me the link for part 3? I couldn't find it.

    • @ScholarNest
      @ScholarNest  5 years ago +1

      Check the playlist

    • @KnowWorldsFact
      @KnowWorldsFact 5 years ago

      Thanks, will check. You have explained all the videos in very simple language. :)

  • @ravikirantuduru1061
    @ravikirantuduru1061 5 years ago

    Sir, I have one doubt: is the number of partitions equal to the number of executors?

  • @jaineshmodi
    @jaineshmodi 6 years ago

    Sir, I am developing with Spring Kafka. Could you please help me with a consumer question? How do I poll at regular intervals, e.g. every 5 minutes, and how do I specify the number of records to be read in every poll?
    I saw that a batch listener can be used to specify the number of records to read, but I did not find a polling-interval option.
    Thanks.

    • @ScholarNest
      @ScholarNest  6 years ago

      +Jainesh Modi What is Spring Kafka?

    • @jaineshmodi
      @jaineshmodi 6 years ago

      Learning Journal sir, I meant Kafka with Spring Boot.

    • @ScholarNest
      @ScholarNest  6 years ago

      +Jainesh Modi Have you seen my Kafka videos? I have discussed the consumer APIs in detail. I am not sure what you mean by batch listener.

    • @jaineshmodi
      @jaineshmodi 6 years ago

      Yes sir, I have gone through your videos.
      My requirement is: as a consumer, I want to put a delay on every poll and also control the number of records being read in every poll.

    • @ScholarNest
      @ScholarNest  6 years ago

      +Jainesh Modi To be honest, I haven't used Spring Kafka, I just saw the documentation. Looks interesting. I will plan some time to evaluate it and send you details if I find an answer to your problem.

  • @biswajitsarkar5538
    @biswajitsarkar5538 6 years ago

    Great explanation !!