Data validation between source and target table | PySpark Interview Question |
- Uploaded 27. 04. 2024
- Hello Everyone,
from pyspark.sql import SparkSession

# Create (or reuse) a SparkSession; in Databricks notebooks `spark` already exists
spark = SparkSession.builder.getOrCreate()

# Source table
source_data = [(1,'A'),(2,'B'),(3,'C'),(4,'D'),(5,'E')]
source_schema = ['id','name']
source_df = spark.createDataFrame(source_data,source_schema)
source_df.show()

# Target table: ids 3 and 4 have different names, id 5 is missing, id 6 is extra
target_data = [(1,'A'),(2,'B'),(3,'X'),(4,'F'),(6,'G')]
target_schema = ['id','name']
target_df = spark.createDataFrame(target_data,target_schema)
target_df.show()
This series is for beginners and intermediate-level candidates who want to crack PySpark interviews.
Here is the link to the course : www.geekcoders.co.in/courses/...
#pyspark #interviewquestions #interview #pysparkinterview #dataengineer #aws #databricks #python
I request you to please create a playlist for PySpark unit testing.
Here are the steps I follow to compare a source table against a target table:
1) Row counts should match between the source and target tables.
2) Schemas should match between the source and target tables.
3) Use except (in both directions) to check whether any records are present in the source but not in the target, or vice versa.
4) Use a left anti join to find the records that do not match.
5) Debug why those records are mismatched.
Nice
The main problem I found in learning PySpark is the brackets; every time they give me some error.
Yes
exceptAll can be useful too, or an anti join.
exceptAll may miss null values sometimes.
Please make a video on PySpark unit testing.