How To Scrape Reddit & Automatically Label Data For NLP Projects | Reddit API Tutorial
- Published 23 Jul 2024
- In this tutorial I show you how to scrape Reddit with the Reddit API and automatically label the data for NLP projects. We use PRAW (the Python Reddit API Wrapper) to download the data and NLTK to do sentiment classification and assign positive, negative, and neutral labels.
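The labeling step described above maps VADER's compound score to one of the three labels. A minimal sketch, assuming the commonly cited ±0.05 VADER cut-offs (the exact thresholds used in the video may differ):

```python
# Map a VADER compound score (-1.0 .. 1.0) to a sentiment label.
# The +/-0.05 cut-offs are the widely used VADER convention;
# the exact thresholds chosen in the video may differ.
def label_sentiment(compound, threshold=0.05):
    if compound > threshold:
        return "positive"
    if compound < -threshold:
        return "negative"
    return "neutral"

# In the video the compound score comes from NLTK's VADER analyzer:
#   import nltk
#   nltk.download("vader_lexicon")
#   from nltk.sentiment.vader import SentimentIntensityAnalyzer
#   sia = SentimentIntensityAnalyzer()
#   compound = sia.polarity_scores(headline)["compound"]
```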
Get my Free NumPy Handbook:
www.python-engineer.com/numpy...
✅ Write cleaner code with Sourcery, instant refactoring suggestions in VS Code & PyCharm: sourcery.ai/?... *
⭐ Join Our Discord : / discord
📓 ML Notebooks available on Patreon:
/ patrickloeber
If you enjoyed this video, please subscribe to the channel:
▶️ : / @patloeber
Resources:
Reddit API Setup: / apps
PRAW: praw.readthedocs.io/
PRAW submission docs: praw.readthedocs.io/en/latest...
~~~~~~~~~~~~~~~ CONNECT ~~~~~~~~~~~~~~~
🖥️ Website: www.python-engineer.com
🐦 Twitter - / patloeber
✉️ Newsletter - www.python-engineer.com/newsl...
📸 Instagram - / patloeber
🦾 Discord: / discord
▶️ Subscribe: / @patloeber
~~~~~~~~~~~~~~ SUPPORT ME ~~~~~~~~~~~~~~
🅿 Patreon - / patrickloeber
#Python
Timeline:
00:00 - Introduction
01:20 - Part 1: Reddit API
07:43 - Part 2: Label The Data
----------------------------------------------------------------------------------------------------------
* This is an affiliate link. By clicking on it you will not have any additional costs, instead you will support me and my project. Thank you so much for the support! 🙏
MAGNIFICENT! I just needed the first part, getting the post titles. Thank you, man!
You are certainly one of my favorite Python Masters. I really needed to learn how to do this for stocks. Thank you sooooooo much! You are AWESOME!!!!!
Glad you like it :)
I looked all around today and I couldn't find how to search a subreddit by a keyword like "PLTR", get the results, and use the NLTK library. If anyone can help, I would appreciate it.
This is really awesome, thank you for taking the time to put this together.
Thanks! Definitely, do more of these API consumption/analysis videos.
Ok :)
wowwww easiest tutorial to follow by FAR. thank you!!!!
Wooow, amazing. You saved my life, very useful. Thanks a lot! :)
Thanks for doing this! I have been wanting to scrape reddit for a while as exploratory analysis
hope you like it :)
Thank you very much, you saved my life and my dissertation for my master's degree.
Great video and well explained! How do you scrape ALL the posts for a certain time period? I am looking at a small subreddit and require a lot of data.
This is exactly what I wanted. I would like to know what modifications I have to make in order to get the headlines with the flair as well.
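Flair is exposed on each submission as `link_flair_text` (a real PRAW attribute; it is `None` when a post has no flair), so the change is small. A sketch over any subreddit-like object:

```python
def titles_with_flair(subreddit, limit=25):
    # Returns (title, flair) pairs for the hot listing;
    # link_flair_text is None for unflaired posts.
    return [(post.title, post.link_flair_text)
            for post in subreddit.hot(limit=limit)]
```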
Thank you for this wonderful video. But how did you get the URL used in the beginning?
Great video, I am trying to get the historical daily number of members of a subreddit. Is it possible using PRAW?
Thank you for this video. This is really helpful. I am trying to get data for a particular time period (March 2020- November 2020). Can you please tell me how to write the code for this?
Thanks so much for your video. Will you share the code on GitHub or somewhere?
I have a project I could use the help of someone of your caliber with. I want to determine the five stocks mentioned most frequently on Reddit's WallStreetBets page on a given day, from January 2022 to August 2022 (I have the CSV file for this). After that I want to take the five most commonly mentioned stocks based on number of days in the top 5 from the aforementioned analysis. I would like to plot the number of mentions of each stock per day against its stock price for the designated time frame. Any help you can offer would be greatly appreciated.
Could you do a tutorial on mining historical data as well? Thank you.
Hey, thanks for the video :) I read below that you are from Germany. I have a question: is it also possible to use LDA to assign incoming text messages to topic areas?
Wow push and praw!
Yes, I would love it if you published a video on a complete project.
Thank you.
Ok 👌🏻
Do you know how to scrape a specified time period, so I can compare sentiment towards a stock within r/wallstreetbets or r/investments against the historical stock price for the same period?
Is there a way to automatically scrape any new posts in a subreddit (without having to re-run the program)?
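PRAW can do this with `subreddit.stream.submissions()`, which blocks and yields posts as they arrive (`skip_existing=True` is a real PRAW parameter that ignores posts made before start-up). A sketch:

```python
def watch_new_posts(subreddit, handler):
    # Against a live praw.models.Subreddit this blocks indefinitely;
    # each newly created post is passed to handler as it appears.
    for post in subreddit.stream.submissions(skip_existing=True):
        handler(post)

# Usage sketch: watch_new_posts(reddit.subreddit("wallstreetbets"),
#                               lambda post: print(post.title))
```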
Great video! Quick question: how do I scrape historical headlines with a date stamp?
The UTC attribute will give you the Unix Timestamp, then you just have to convert it. Getting historical headlines may be a little trickier, as the PRAW API allows you to iterate through the following "submission" types: controversial, gilded, hot, new, rising, top.
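A sketch of the conversion described in the reply above, plus a period filter built on it (standard library only; `created_utc` is PRAW's Unix-timestamp attribute on submissions):

```python
from datetime import datetime, timezone

def submission_date(created_utc):
    # created_utc is seconds since the Unix epoch, in UTC.
    return datetime.fromtimestamp(created_utc, tz=timezone.utc)

def in_period(created_utc, start, end):
    # True if the post falls inside the closed interval [start, end].
    return start <= submission_date(created_utc) <= end
```

Since Reddit listings are capped at roughly 1,000 items each, covering a long historical window usually means iterating the `new` listing and filtering with something like `in_period`, or turning to an external archive of Reddit data.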
How do I scrape comments from reddit posts?
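With PRAW you iterate a submission's `comments` forest; `replace_more(limit=0)` is the real PRAW call that strips the "load more comments" placeholders first. A sketch over any submission-like object:

```python
def top_level_comments(submission):
    # submission is expected to be a praw.models.Submission,
    # e.g. reddit.submission(id="abc123").
    submission.comments.replace_more(limit=0)
    return [comment.body for comment in submission.comments]

# To flatten the whole tree (not just top-level comments), PRAW offers:
#   all_bodies = [c.body for c in submission.comments.list()]
```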
Why is the user_agent not "Example"?
Great tutorial! Thanks a lot! Is there any opportunity to do the same with Twitter data?
I already have two tutorials using the Twitter API (a TensorFlow NLP one and a Flask Twitter bot). Maybe you can apply the knowledge from those videos here.
@@patloeber Nice! Apparently I just missed them! Thanks!
How can I get this code?
Maybe you can scrape the subreddit wallstreetbets :D
Good idea
@@patloeber Scrape to see the stocks that are rising in popularity! haha
Why is it always a 401 error?
I'm using Python 3.9, so older versions may differ for my comment below. Thanks.
First!
Are you german?
Yes I am
Just went to that politics subreddit. It's laughably biased. Thanks for the tutorial.
Haha yeah
Please import the following:
import matplotlib.pyplot as plt
import seaborn as sns
nltk.download('vader_lexicon')
Use from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer as SIA in place of from nltk.sentiment.vader import SentimentIntensityAnalyzer as SIA (note the spelling: Analyzer, not Analyser).
Also, width=100 raises "TypeError: 'width' is an invalid keyword argument for print()" because width is a parameter of pprint.pprint(), not of the built-in print() — use from pprint import pprint and call pprint(results, width=100) instead.