Data validation between source and target table | PySpark Interview Question |

  • Published 27 Apr 2024
  • Hello Everyone,
    source_data = [(1,'A'),(2,'B'),(3,'C'),(4,'D'),(5,'E')]
    source_schema = ['id','name']
    source_df = spark.createDataFrame(source_data,source_schema)
    source_df.show()
    target_data = [(1,'A'),(2,'B'),(3,'X'),(4,'F'),(6,'G')]
    target_schema = ['id','name']
    target_df = spark.createDataFrame(target_data,target_schema)
    target_df.show()
    This series is for beginner and intermediate-level candidates who want to crack PySpark interviews.
    Here is the link to the course: www.geekcoders.co.in/courses/...
    #pyspark #interviewquestions #interview #pysparkinterview #dataengineer #aws #databricks #python
  • Entertainment

Comments • 8

  • @nishirajnikku969 2 months ago

    Please create a playlist for PySpark unit testing.

  • @rishabhkesarwani-br2rx 2 months ago +2

    I follow the steps below to compare a source vs target table:
    1) Counts should match between the source and target tables
    2) Schemas should match between the source and target tables
    3) Use except to check whether any records are present in source but not in target, or vice versa
    4) Use a left anti join to find the records that do not match
    5) Debug why those records mismatch

  • @jhonsen9842 3 months ago +3

    The main problem I found while learning PySpark is the brackets; every time they give me some error.

  • @gudiatoka 3 months ago +1

    exceptAll can be useful too, or an anti join

    • @GeekCoders 3 months ago

      exceptAll may sometimes miss null values

  • @shivamchandan50 2 months ago

    Please make a video on PySpark unit testing