Data Cleaning in Pandas | Python Pandas Tutorials
- Published: 13 Jun 2024
- Take my Full Python Course Here: www.analystbuilder.com/course...
In this series we will be walking through everything you need to know to get started in Pandas! In this video, we learn about Data Cleaning in Pandas.
Datasets in GitHub:
github.com/AlexTheAnalyst/Pan...
Code in GitHub: github.com/AlexTheAnalyst/Pan...
Favorite Pandas Course:
Data Analysis with Pandas and Python - bit.ly/3KHMLlu
____________________________________________
SUBSCRIBE!
Do you want to become a Data Analyst? That's what this channel is all about! My goal is to help you learn everything you need in order to start your career or even switch your career into Data Analytics. Be sure to subscribe to not miss out on any content!
____________________________________________
RESOURCES:
Coursera Courses:
📖Google Data Analyst Certification: coursera.pxf.io/5bBd62
📖Data Analysis with Python - coursera.pxf.io/BXY3Wy
📖IBM Data Analysis Specialization - coursera.pxf.io/AoYOdR
📖Tableau Data Visualization - coursera.pxf.io/MXYqaN
Udemy Courses:
📖Python for Data Analysis and Visualization- bit.ly/3hhX4LX
📖Statistics for Data Science - bit.ly/37jqDbq
📖SQL for Data Analysts (SSMS) - bit.ly/3fkqEij
📖Tableau A-Z - bit.ly/385lYvN
Please note I may earn a small commission for any purchase through these links - Thanks for supporting the channel!
____________________________________________
BECOME A MEMBER -
Want to support the channel? Consider becoming a member! I do Monthly Livestreams and you get some awesome Emoji's to use in chat and comments!
/ @alextheanalyst
____________________________________________
Websites:
💻Website: AlexTheAnalyst.com
💾GitHub: github.com/AlexTheAnalyst
📱Instagram: @Alex_The_Analyst
____________________________________________
0:00 Intro
0:41 First Look at Data
2:34 Removing Duplicates
3:41 Dropping Columns
5:10 Strip
12:15 Cleaning/Standardizing Phone Numbers
21:29 Splitting Columns
24:58 Standardizing Column Values using Replace
28:40 Fill Null Values
29:42 Filtering Down Rows of Data
36:42 Outro
All opinions or statements in this video are my own and do not reflect the opinion of the company I work for or have ever worked for
For those struggling with the regular expression at 14:57, you might need to explicitly assign regex=True (based on the FutureWarning displayed in the video). That is:
df['Phone_Number'] = df['Phone_Number'].str.replace('[^a-zA-Z0-9]', '', regex=True)
gosh you're observant
Thank you!
My goodness. You saved me. I’ve been at this for about an hour. Thank you 🙏 thank you 🙏
Thanks a lot dude !!!!!! Helped a lot !!!!!!!
Legend.
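A quick way to sanity-check the regex=True fix above on a made-up Series (the phone formats are assumed from the video, not taken from the actual dataset):

```python
import pandas as pd

# Hypothetical sample resembling the video's Phone_Number column
phones = pd.Series(["123-545-5421", "123/643/9775", "876|678|3469"])

# With regex=True the character class strips every non-alphanumeric
# character; newer pandas may otherwise treat the pattern literally
cleaned = phones.str.replace("[^a-zA-Z0-9]", "", regex=True)
print(cleaned.tolist())  # ['1235455421', '1236439775', '8766783469']
```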
Fan from India. I just got 2 offers from very good companies thanks to your videos, and they helped me transition from customer success support to Data Analyst
Hey, tell me how I can do it too. Right now I'm working as a customer support executive, please help me grow.
Hey Rahul, how did you learn DA? Can you share your experience? It will be helpful for us!!
Hi bro, is this course sufficient for a beginner to land a job?
Is this a spam comment?
@rozakhan2811 Skills are the basic thing. Whatever you want to do, be strong in that. And Alex's way of teaching in these videos is effective.
For splitting the address at 21:29, you may want to add a named parameter to the value of 2, as in n=2:
df[["Street_Address", "State", "Zip_Code"]] = df["Address"].str.split(',', n=2, expand=True)
This helps! Thank you so much!
Thank you very much
thank you very much
Thank you!
OMG! Thank you so very much. I have been trying to figure this out for about four days now. I figured out the phone number issue and then how to split the address, but for the life of me splitting the address into named columns with the changes committed the df was not working. THANK YOU!
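The effect of the n=2 suggestion above can be sketched on made-up addresses (column names taken from the video, address values invented):

```python
import pandas as pd

# Made-up addresses shaped like the video's Address column
df = pd.DataFrame({"Address": [
    "25 First Street, NY, 10001",
    "910 Tarna Drive, Dallas, TX",
]})

# n=2 caps the split at two cuts, so every row yields exactly three
# parts and the assignment to three named columns always lines up
df[["Street_Address", "State", "Zip_Code"]] = df["Address"].str.split(",", n=2, expand=True)
print(df[["Street_Address", "State", "Zip_Code"]])
```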
Found this REALLY helpful! I love how you walk us through mistakes as well as explain WHY you do what you do throughout your videos. It adds so much value to each video. As always, THANK YOU ALEX!!
This is one of the best videos regarding data cleaning I have ever watched. Really crisp and covers almost all the important steps. It also dives deep into concepts that are really important, but you rarely see anybody applying them.
A must-watch for everybody who is looking to get into the data field or is already in the field.
Glad to hear it!
I like how in some of your videos you show us the long way and then the short cut, instead of just showing the short cut. I think that way gives the person who is learning a better breakdown of what they are doing.
For the address column: df[["Street_Address", "State", "Zip_Code"]] = df["Address"].str.split(",", n=2, expand=True). Passing only a bare 2 was giving me an error, so I had to change it to n=2
This helped me, thank you! However, what does "n" mean?
@DreaSimply21 The n=2 parameter indicates that the split should occur at most two times, producing three resulting parts.
Thank you for this. It helped me a great deal
Some of the phone numbers are removed while doing the formatting. If you look in the Excel file, you'll see that some of the numbers are strings and some are integers. When you run the string method during the formatting, it replaces the numeric values with NaN and they are later removed completely. If you want to avoid losing that data you'll need to use
df["Phone_Number"] = df["Phone_Number"].astype(str)
before formatting. You also won't need to convert to string in the lambda after doing this.
If you want to replace the empty values in Do_Not_Contact you'll need to use
df["Do_Not_Contact"] = df["Do_Not_Contact"].fillna("N")
Technically those values are not empty, they are NaNs, which is why replace was giving them 'NNN' instead of just the one 'N': once cast to string, a NaN becomes the three-character text 'nan', so a per-character replacement produces three N's.
that's what i've noticed too, great work
You are a genius, thanks :)
Thanks man, this worked.
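The astype(str) point above can be demonstrated on a tiny mixed-type Series (the values are made up, but the dtype mix mirrors what the comment describes):

```python
import numpy as np
import pandas as pd

# Mixed column: some phone numbers read in as strings, one as an integer
phones = pd.Series(["123-545-5421", 1236439775, np.nan], dtype="object")

# .str methods return NaN for non-string elements, so the integer
# phone number silently turns into NaN and would be dropped later
lost = phones.str.replace("[^0-9]", "", regex=True)
print(lost.tolist())

# Casting to str first keeps it (the original NaN becomes the text
# 'nan', which the video then cleans up separately)
kept = phones.astype(str).str.replace("[^0-9]", "", regex=True)
print(kept.tolist())
```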
Thanks for this content, this was so helpful!!
I think I have some optimizations, correct me if I'm wrong :D
27:04 instead of calling the replace function multiple times, you can create a mapping like replace_mapping = {'Yes': 'Y', 'No': 'N'} and call df = df.replace(replace_mapping), so you don't have to specify a mapping for each column and only need to call .replace() once.
34:16 instead of the for loop + manually dropping row by row, you can use .loc like df = df.loc[df["Do_Not_Contact"] == "N"] to filter the rows on that criterion.
Where did you learn that you could use a dictionary to replace multiple values in one line? This is really useful, thanks!
Thank You. 34:16 is really helpful. I appreciate your kindness.
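The two suggestions above can be sketched together on a toy frame (column values assumed from the video):

```python
import pandas as pd

df = pd.DataFrame({
    "Paying Customer": ["Yes", "No", "Y"],
    "Do_Not_Contact": ["Y", "N", "N"],
})

# One mapping, one call, applied across the whole frame
df = df.replace({"Yes": "Y", "No": "N"})

# Boolean indexing keeps only the contactable rows, no loop needed
df = df.loc[df["Do_Not_Contact"] == "N"]
print(df)
```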
I really like when you make mistakes, because it shows that no one is perfect. I sometimes get anxious when I watch tutorials and the presenters seem to be so good. You also show that the struggles you experience throughout the process are real. Thanks for the tutorial, Alex.
I discovered that replace() has a regex argument (regular expression). str.replace does substring matching, but Series.replace with regex=False only looks for exact whole-cell matches, meaning it won't change 'Yes' to 'Yeses', only a bare 'Y' to 'Yes'. We can write df["Paying Customer"].replace('Y', 'Yes', regex=False) and it will work as expected.
mine didnt work lol
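A minimal sketch of the difference described above, assuming a 'Paying Customer'-style column:

```python
import pandas as pd

s = pd.Series(["Y", "Yes", "N"])

# Series.replace matches the whole cell value by default,
# so existing 'Yes' cells are left alone
print(s.replace("Y", "Yes").tolist())      # ['Yes', 'Yes', 'N']

# Series.str.replace substitutes substrings, which is what
# turns 'Yes' into 'Yeses' in the video
print(s.str.replace("Y", "Yes").tolist())  # ['Yes', 'Yeses', 'N']
```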
Thank you sir, you can't imagine how confident I feel in cleaning data after completing this video with real data practice. Thank you once again.
thank you for your work Alex! I went through the entire video step by step twice and I can tell I learned a lot from it, finally understanding why we need to learn loops etc., and how simple cleaning methods work in Jupyter.
If df["Phone_Number"].str.replace('[^a-zA-Z0-9]', '') is not working for you, try df["Phone_Number"].str.replace('[^a-zA-Z0-9]', '', regex=True)
Thanks!
Hi, thanks.
If I try this, indexes 2, 11 and 17 become NaN when originally they were in the correct format. Kindly help
Thanks a ton, been looking for it for almost a week
Thanxxxxxsss aaa lotttt🙌
Simply amazing! Well-explained and comprehensive. Loved it!
I enjoyed working on this project. Thank you Alex and a huge thank you to those guys who helped in the struggling minutes!
If you're getting an error when trying to split the address, this is what worked for me; I had to remove the number of values to look for.
df[["Street_Address", "State", "Zip_Code"]] = df["Address"].str.split(',', expand=True)
df[["Street_Address", "State", "Zip_Code"]] = df["Address"].str.split(pat=',', n=2, expand=True). Use this; you have to name the pat parameter.
thank you!
what does that do exactly?
Thank you for this video. I just finished this part of the data analytics course and I definitely learned something new and helpful.
This is the best video I have ever watched on data cleaning using pandas.. even the mistakes were good to learn from.
Instead of applying a lambda function to convert the Phone_Number column elements to strings, we can also use
df['Phone_Number'] = df['Phone_Number'].astype(str)
and pass a dictionary as the argument to the replace method to avoid 'Yes' becoming 'YYes': df['Paying Customer'] = df['Paying Customer'].replace({'Y':'Yes','N':'No'})
I've been struggling with Pandas a bit and this video cleared some things for me!
what frustrates me from the way my teachers would teach Pandas, their solutions are sometimes too efficient, in the sense that a student that started from zero who's taking an exam, will never be able to come up with these hyper efficient and elegant one-liners in their code. what I appreciate in your video is how you achieve the same results, but in a way that a beginner can easily remember and apply on an exam. thank you! I'll be checking out more of your videos.
Very well done! Great video. I am working on analyzing and cleaning scraped data from web and this guide is helpful, especially where you mentioned the mistakes.
Alex you are the GOAT! For real, thank you for all the tutorials and your help for everyone who wants to become a data analyst!
Glad to do it! :D
This is really very important to both the beginners and pro. Kudos!!
Thank you Alex for this video on data cleaning with pandas. It is very detailed and explanatory
Great video! I enjoyed learning from you! Thanks for making things easier to understand
My fav thing to do in pandas, thanks for making this tutorial.
Great Pandas data cleaning video. Thank you very much for sharing your knowledge.
Best video available on internet so far for data cleaning in Pandas. Best explanation. 😇😇
I am studying Data Collection and Data Visualization at Kings College; your channel is recommended by our lecturers to understand data cleaning.
Thank you Alex. Your videos are very helpful. Now I can resume cleaning my data.
The video I needed to get realistic practice in data cleaning. Thanks!
And I was already looking for some Pandas tutorial. Thank you, Alex, this was much needed. :)
Glad to help!
Thanks for the video. Helped a lot in understanding Pandas.
In the Last_Name column we can use the replace function to remove stray characters like . / _
code:
df["Last_Name"] = df["Last_Name"].str.replace("[./_]", "", regex=True)
OMG Thank youuuu!!! I knew someone on here had to know the answer to how to use regex lol.
Thanks
Many thanks for the dataset+code+video!!! 🔥🔥
Your work is amazing. Thank you so much.
Oh my.. I am going to watch every single video you created..
Thanks Alex, Please post more videos.
A Glorious Thank You!! Please Keep This UP!!!!
Thanks for this absolutely great video.
You are great, Alex. Your teaching skills are excellent.
Thanks! 😃
Great video man, need more tutorials of this type
Your explanation was super cool
Thank you so much, Alex. You are the Best
thank you very much. your video helped me a lot. good luck
Thank you Alex. That Lambda example is going to be very useful.
Glad to hear it! :D
man let's go, you are our hero for those of us who can't afford paid courses
Very helpful, and well explained.
Hey Alex, Thanks for the super content ...!
Amazing explanations!
After making it this far through the course over the last 2 months, looking at these last 4 videos I'm getting strong final exam vibes. Python has not felt intuitive to me at all, but I recognize its value. I guess it feels like taking Spanish 1 and having Spanish 2 tests. I'm definitely looking forward to applying what I've learned here to solidify the lessons more. I'm contracting for a company already and writing a proposal for them to transition to My SQL Server. I guess the fact that I feel overwhelmed with all the info means I'm actually learning how little I actually know, which is a good thing for growth in the long run. Rambling here, but I am incredibly thankful for the course, Alex.
Thank you for this very useful video!
Thank you soo much sir you're really a great professor 👏❤
I'm in love with ur videos
Using regular expressions for manipulating data is beneficial because it allows you to change strings as needed, especially when dealing with different types of strings.
Super Explanation Thanks
Thank you, great video!
Also, to clean the Do_Not_Contact field, one can use: df['Do_Not_Contact'] = df['Do_Not_Contact'].replace({'N': 'No', 'Y': 'Yes'})
very well explained video thank youuuu
Thank you Alex! 🙏
Really enjoyed the video
Thank you so much this awesome video
You really did a good job. I became a big fan of you. Thank you so much for doing this.
Yesss love these vids
Alex, I loved the video. It has a correct explanation. Thank you so much for your video.
There is a small mistake while you are typing:
#Another way to drop null values
df.dropna(subset=['Column_name'], inplace=True). I hope you will note the error.
Thank you.
Have a great day!
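A quick sketch of the dropna(subset=...) suggestion above on a made-up frame (column names borrowed from the video's dataset; passing subset as a list works across pandas versions):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Last_Name": ["White", np.nan, "Schrute"],
    "Phone_Number": ["1235455421", "1236439775", np.nan],
})

# Drop only rows whose Phone_Number is missing;
# NaNs in other columns are left alone
df.dropna(subset=["Phone_Number"], inplace=True)
print(df)
```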
Alex, I have a question regarding the part at 18:50 where you change the phone number column into strings using str() inside the lambda. Can I get the same result by using df["Phone_Number"].astype(str) first and then doing the lambda? Or is there a nuance so that it only works using str()? Thanks for the great work!
Thanks a lot Alex for the video ! This was exactly what I was looking for. May I request you to try and upload video on how to write Python ETL code which uses table in a cloud database like snowflake, saves it in a csv format, transforms it and then again uploads it on snowflake. And all these steps are being captured in a log file which is in txt format !
vouching for this @Alex. It'd be really appreciated TIA
For explanation purposes, it is great.
For getting the final result, I would have done differently though
Hi Alex, idk if you will see this comment. So I was doing the same codes, and I noticed when you eliminated the characters for the phone numbers at 14:57 you also deleted the phone numbers that did not have any characters in them. You can see that at index 3 for Walter White, before he had a phone number but after he had NaN. If you can tell me how to correct it, it would be very great. I also never commented on your videos, but i like them very much, they are very good, and helpful. Thanks for everything
Not sure if you're still looking for a solution, but from some online searching I found a way to avoid deleting the phone numbers that contain no extra characters: adding .astype(str) before .str.replace seems to fix the issue, and the code should look something like this:
df["Phone_Number"] = df['Phone_Number'].astype(str).str.replace('[^a-zA-Z0-9]', '', regex=True)
Also note you'll have to add in regex=True manually.
Maybe it's deleting them because it somehow interprets the whole number as non-string and turns it into NaN; not 100% sure though, still a beginner, and it might cause issues with other types of data.
Thanks for the detailed tutorial Alex. I was wondering, if i wanted to become a data scientist instead of a data analyst, would you recommend any people in the industry who I should follow? F.e is there an Alex the Data Scientist out there?😄
Thanks for this
Amazing Video
If anyone is getting an error on df['Address'].str.split(",", 2, expand=True), you can omit the 2 and use df["Address"].str.split(",", expand=True)
I was intimidated by the Machine learning module but now I am not. Thanks a lot dude
Hey Alex, I just started your Pandas tutorial and I was waiting for the data cleaning video; when I opened my YouTube, your video was the first thing I saw. This is a boon for me 😇🥺 Thanks, I hope you will upload Matplotlib, NumPy and many more library videos ❤🤗
In the future, yes :)
Thanks brother!❤
Hey alex, we don't need to take any course because you are there 😉
I am doing your bootcamp of becoming a data analyst
Do it! I try my best to bring the best free content I can :)
Great stuff ! Do a collab with Rob Mulla !
really helpful
Hey alex, could you please expand in detail about the lambda function? thank you.
which one is better for data cleaning, Pandas or Excel ?
Not an analyst (never wanted to be), but it was very interesting. Thanks!
Still Helpful Thanks
Timestamp 32:42. I simply use
#Filter out "Do_Not_Contact" == "Yes"
df[df['Do_Not_Contact']!='Yes']
Great video, thank you. When we did the first lambda, the reason was that lambda is faster. So why did we go against using a lambda when it was time to check whether the customer can be called or not?
Great video thanks! Can’t help thinking that tools like chatGPT, github copilot al, GPT engineer can pretty much tell you how to/do this all for you so maybe I am wasting my time learning this 😅
thank you
Sir, in your opinion : Jupyter vs Pycharm? Which is better for data cleaning ?
Perfect 👍
Thank you Alex for this detailed breakdown. Just a side note for those who don't like to use loops e.g. for, while
For 31:00, you could do the following code: df.drop(df[df['Do_Not_Contact'] == 'Y'].index, inplace=True)
I'd say that's complicating the code. You can simply do
df = df[df['Do_Not_Contact'] != "Y"]
@LuisRivera-oc6xh I literally used this the first time learning pandas myself
df = df.drop(df[df['Do_Not_Contact'] == 'Y'].index)
df = df.drop(df[df['Do_Not_Contact'] == ''].index)
OR
df = df[df['Do_Not_Contact'] == 'N']
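Both routes in the thread above end up with the same rows; a toy comparison (column values assumed from the video):

```python
import pandas as pd

df = pd.DataFrame({"Do_Not_Contact": ["Y", "N", "", "N"]})

# Route 1: collect the offending index labels and drop them
dropped = df.drop(df[df["Do_Not_Contact"] == "Y"].index)
dropped = dropped.drop(dropped[dropped["Do_Not_Contact"] == ""].index)

# Route 2: one boolean mask keeping only explicit 'N' rows
masked = df[df["Do_Not_Contact"] == "N"]

print(dropped["Do_Not_Contact"].tolist())  # ['N', 'N']
print(masked["Do_Not_Contact"].tolist())   # ['N', 'N']
```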
For the phone numbers, why don't you convert each record into a str first? Then when you apply the regular expression you can get rid of the NaN and Na along with the other stuff.
I not only survived! On 20:46, instead of two calls you can combine both patterns into one: .str.replace('nan--|Na--', '', regex=True). Thank you 1:1
Question: is it bad that I don't specify the .str in df["Do_Not_Contact"].replace("Y", "Yes")?
My only doubt is: you saw the first 20 rows and decided only \ or .. or _ could be present in the names, or that only "NaN" or "N/a" appear in a given column, while replacing them. What if the 50th row has "%Mike" as a name, or "Null" in one of the columns? How do we deal with that? Great recap for me other than this. Thank you.
Python is so fun
Nice one Alex. Don't forget to add comments to the code! 🙂
lol for sure!
In Python, what is the difference between NaN, NA and a blank string ('')? Please explain
Hello Alex, while cleaning the Phone_Number column (14:00 to 21:39) the code executes, but the table shows no changes.
Please help me with this
you might have a newer pandas version, just add regex = True as an extra parameter:
df['Phone_Number'] = df['Phone_Number'].str.replace('[^a-zA-Z0-9]', '', regex=True)
Thank you for the video
When trying to filter using the DNC column, couldn't we have done
df = df[df['Do_Not_Contact'] != 'Y']