Python coding - How to access the Reddit API tutorial

  • Published 28. 07. 2020
  • In this video, I take you through how to access the Reddit API with Python to collect a subreddit's moderator list, and then collect each moderator's post history to show the times of day they are most / least active.
    The curl-to-Python-requests website I promised: curl.trillworks.com/
    If you found this video helpful, please consider sharing it! And if you haven't already subscribed, I would love to have you as a subscriber.
    Thanks team!
    Adam
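
  • A minimal sketch of the approach described above, assuming the public .json endpoint and the requests library (the User-Agent value and JSON field names are illustrative, not taken verbatim from the video):

        import requests

        # Reddit's public JSON endpoints generally expect a descriptive User-Agent
        # (illustrative value, not the exact headers used in the video).
        headers = {"User-Agent": "make-data-useful-tutorial/0.1"}

        # Appending .json to the subreddit's moderators page returns JSON instead of HTML.
        url = "https://www.reddit.com/r/learnpython/about/moderators.json"
        response = requests.get(url, headers=headers)
        response.raise_for_status()

        # The moderator names are assumed to live under data -> children in the payload.
        moderators = [child["name"] for child in response.json()["data"]["children"]]
        print(moderators)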

Comments • 51

  • @MakeDataUseful
    @MakeDataUseful  4 years ago +4

    Hope this one was helpful team! I am a little afraid to share it on /r/learnpython 😂 It would be awesome if you did share it around 👍 I am hopeful I will get to 1,000 subscribers by the end of 2020!

  • @JohnBrute
    @JohnBrute 3 years ago +5

    I love that when an error pops up, you troubleshoot the error and show how you troubleshoot! It would be so easy for you to hide the error by doing some editing. I commend your transparency when doing these tutorials.

  • @sarcasmasaservice
    @sarcasmasaservice 3 years ago +6

    I know I'm late to this party but I would be very interested in seeing a video where you extend this example using a Python visualization library. 👍

  • @naughtymonkeysnet
    @naughtymonkeysnet 4 years ago +3

    Thanks, this is great to see. Helpful to see someone familiar with Python talk through their thought process as they tackle a task.

    • @MakeDataUseful
      @MakeDataUseful  4 years ago

      Thanks! I think learning by doing is the best way to learn.

  • @KeiraraVT
    @KeiraraVT a year ago

    The UTC part reminded me of when I used it to convert the current UTC time into every time zone. That way I could see what time it was in AU for some of my gamer buddies, or the UK, etc.

  • @bozok1903
    @bozok1903 3 years ago

    Another great tutorial. I love the way you teach. Please extend this series with data visualization tutorials.

  • @cozyrain410
    @cozyrain410 4 years ago +2

    Very helpful video. Thank you. I struggle with API usage in general, so this helps.

  • @nerooderschvank7729
    @nerooderschvank7729 3 years ago

    Very useful one and thanks for making this video!

  • @stijnderuijter142
    @stijnderuijter142 3 years ago

    Great video, many thanks!

  • @WisdomShortvids
    @WisdomShortvids 3 years ago

    Excellent, mate, really good tutorial, thanks!

  • @matthewmcconaghy-shanley9673

    Subbed and thanks man

  • @oscarmartinezbeltran
    @oscarmartinezbeltran 3 years ago

    We are missing your great tutorials!
    Don't let your channel die, please!
    You make great content. Can we follow you on other platforms?

  • @oscarmartinezbeltran
    @oscarmartinezbeltran 3 years ago +1

    good stuff bro
    keep it up!!

    • @MakeDataUseful
      @MakeDataUseful  3 years ago

      Thanks Oscar! Appreciate the feedback. Some new stuff coming soon!

  • @velvetcasuat
    @velvetcasuat 4 years ago +2

    Hey Adam, you should put the scripts on GitHub or maybe even your own site or something ... some will find them useful.

    • @MakeDataUseful
      @MakeDataUseful  4 years ago +1

      Good thinking! I'll find a home for them so we can share and improve them as a community!

  • @lamaali1414
    @lamaali1414 3 years ago +1

    Thank you very much for the tutorial!
    I'd like to ask about the 1,000-item limit in the Reddit API: is there a way to get more than 1,000 submissions using PRAW?

  • @RobotBoyZzz
    @RobotBoyZzz 4 years ago

    Good video! A more optimal approach when working with dates would probably be to do it once we've already parsed all the values, e.g. df['hour'] = pd.to_datetime(df['activity_utc']).dt.hour. Your way is more generic, though.

    • @MakeDataUseful
      @MakeDataUseful  4 years ago

      Oh good one Zaur! I like that approach because we aren't shipping around extra data in our loop :) Thanks for sharing!
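
      A minimal sketch of the pandas approach Zaur describes above, assuming the collected timestamps are Unix epoch seconds in a column named activity_utc (the column name comes from the comment; the sample data and unit="s" are illustrative assumptions):

          import pandas as pd

          # Illustrative frame: epoch-second timestamps collected from a moderator's post history.
          df = pd.DataFrame({"activity_utc": [1595901600, 1595923200, 1595944800]})

          # Parse once after the collection loop, then pull the hour of day out of the datetimes.
          df["hour"] = pd.to_datetime(df["activity_utc"], unit="s").dt.hour

          # Count activity per hour to see when someone is most / least active.
          print(df["hour"].value_counts().sort_index())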

  • @axvex595
    @axvex595 3 years ago +1

    Do you mind specifying the headers you applied to the GET request?

  • @rafidrahman8654
    @rafidrahman8654 4 years ago +3

    Great video! On your next project you could try building an Instagram bot which does some specific activities. Moreover, I hope you can create a group (Facebook, Discord, etc.) where our community can connect and grow together.

    • @MakeDataUseful
      @MakeDataUseful  4 years ago +1

      Love that idea! I am so humbled by how many people are engaged in my little channel already!

    • @Epipedobatideo
      @Epipedobatideo 4 years ago +1

      Great idea!

  • @alenjose3903
    @alenjose3903 3 years ago

    What is the other way to get the headers, without adding the .json at the end of the URL?

  • @barefootalex
    @barefootalex 3 years ago +1

    Thank you for this valuable content. As I am working through your tutorial, I do have one question. When I copy the cURL from moderators.json and paste the code into curl.trillworks.com, no output is generated. I have tried to troubleshoot the issue for a while (tried a new browser, rewatched and recreated your steps, analyzed the code) and I have no idea what I am missing. It's as if the converter site is no longer active. Any suggestions on how to get the Python request? I appreciate the feedback. Cheers!

    • @MakeDataUseful
      @MakeDataUseful  3 years ago +1

      That's very strange! This could be a number of things, like a poorly formatted curl command or a bug in the site. The good news is you have the curl command, so we will be able to create the Python request manually. Feel free to drop the curl in a comment here, or let me know if you would like a small follow-up video sharing how to manually convert curl to Python requests.

    • @barefootalex
      @barefootalex 3 years ago

      @MakeDataUseful MUCH respect for your fast reply! Thanks for the willingness to help debug. Here is the cURL command I'm pasting into curl.trillworks.com:
      curl 'www.reddit.com/r/learnpython/about/moderators.json' -H 'accept-encoding: gzip, deflate, br' -H 'accept-language: en-US,en;q=0.9' -H 'upgrade-insecure-requests: 1' -H 'user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36' -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8' -H 'cache-control: max-age=0' -H 'authority: www.reddit.com' -H 'cookie: loid=00000000008poyaue5.2.1604249871507.Z0FBQUFBQmZudWtQMENaaE1xWjhfaVVmZWMwVWxDX2ZxNG5WYVl3SVZwQldNTGtEVXNnMm9ub2VqQWZQa3p3TmU4b19xVWtzblNPVHR0ZndtUFdPMENBUVhZdW9PTUdGWGM3V1c5bnhfcG5qMmdTRjB6MGJydkM1N1l4a1l2M2hkX0E5aXIzZDR3dXI; edgebucket=ub5uvrqXL1qnq8FFjy; recent_srs=t5_2r8ot; pc=sm; d2_token=3.2ce74b749d3264187088cb5727f2dc67c9bf3064708eabfe0c8716ed59958a9e.eyJhY2Nlc3NUb2tlbiI6Ii1mWXZNaEJ6TnJDeTQwTlBjSkNXWUNTRHNTbFEiLCJleHBpcmVzIjoiMjAyMC0xMS0wMVQxNzo1ODowNS4wMDBaIiwibG9nZ2VkT3V0Ijp0cnVlLCJzY29wZXMiOlsiKiIsImVtYWlsIl19; aasd=1%7C1604249891358; __aaxsc=2; __gads=ID=c184aed2d0c33de2:T=1604249892:S=ALNI_MaUKh1VOHAtbOuf4wP61wUvfVy16w; session_tracker=7bm0ubwHCCV7JLvQ8R.0.1604249901712.Z0FBQUFBQmZudWt0SlhZaWFsQzlyRlhEUmxHV0RkLVBrN2IzQ2pmZDlvSWZCejUyeEQtVVlHMTJTZGNEXzFWNW5wNFBTSWJPbUhLYmdxMkZRTVVkbXRjWW9NOUcyUGZBS21LZm05UFRPbzhTR1hRZ2U3NHNYUkhmajVrNGlvcktHNUMxOVhiTGJWaEE; initref=google.com' --compressed
      Also, if you are up for it, a video sharing how to manually convert curl to Python requests would be truly terrific. Thanks a ton!

    • @barefootalex
      @barefootalex 3 years ago

      Here is a screen share of me attempting the process:
      www.loom.com/share/a479a42516f44378baad4bc0a8e66e7e

    • @xycia
      @xycia 2 years ago

      Use "Copy all as cURL (bash)".
      Works for me now.
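
      A rough hand conversion of the curl command pasted earlier in this thread into Python requests, the manual route offered above (the cookie header is left out on the assumption that the public moderators.json endpoint does not need it):

          import requests

          # Headers copied from the curl command above; requests handles gzip/deflate itself,
          # so the --compressed flag needs no equivalent here.
          headers = {
              "accept-encoding": "gzip, deflate, br",
              "accept-language": "en-US,en;q=0.9",
              "upgrade-insecure-requests": "1",
              "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36",
              "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
              "cache-control": "max-age=0",
          }

          response = requests.get("https://www.reddit.com/r/learnpython/about/moderators.json", headers=headers)
          print(response.status_code)
          print(response.json())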

  • @fletchkd09
    @fletchkd09 10 months ago

    Great tutorial, but I followed step by step and the curl converter gives an error reading 'split'. I'm totally new to Python, just trying to follow along, but many thanks for the vids.

  • @bartproffitt5240
    @bartproffitt5240 3 years ago

    I got a 403 error right out of the gate... I was surprised. I've had trouble with the VPN I use and requests before, so I tried turning off my VPN as well and still got a 403. Has anything changed with the website?

    • @MakeDataUseful
      @MakeDataUseful  3 years ago

      Hey, maybe! It has been a little while since I uploaded that one. A lot of folks in the comments have suggested the more official route of using PRAW (pypi.org/project/praw/). Check it out and let me know how you go!
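
      A minimal sketch of the PRAW route suggested above (the credentials are placeholders; you would create your own script-type Reddit app to get a client ID and secret, and the calls follow PRAW's documented read-only interface as I understand it):

          import praw

          # Placeholder credentials for a read-only, script-type Reddit app.
          reddit = praw.Reddit(
              client_id="YOUR_CLIENT_ID",
              client_secret="YOUR_CLIENT_SECRET",
              user_agent="make-data-useful-tutorial/0.1",
          )

          subreddit = reddit.subreddit("learnpython")

          # List the subreddit's moderators, then pull a few recent submissions for each one.
          for moderator in subreddit.moderator():
              print(moderator.name)
              for submission in reddit.redditor(moderator.name).submissions.new(limit=5):
                  print(" ", submission.created_utc, submission.title)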

  • @syedhyder5630
    @syedhyder5630 3 years ago

    Sir, I have a doubt. How did you figure out that we can add .json at the end? How did you figure out that the 'after' keyword would give the next 25 comments? Where did you learn all this stuff, sir? Please help me. I also want to learn web scraping in depth.
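
    A minimal sketch of the 'after' pagination asked about above, assuming the public listing endpoints where each JSON response carries a data.after token that is passed back as a query parameter for the next page (the subreddit, limit and User-Agent here are illustrative):

        import requests

        headers = {"User-Agent": "make-data-useful-tutorial/0.1"}  # illustrative User-Agent
        url = "https://www.reddit.com/r/learnpython/new.json"

        after = None
        titles = []
        for _ in range(3):  # fetch three pages as a demo
            params = {"limit": 25, "after": after}  # requests drops params whose value is None
            data = requests.get(url, headers=headers, params=params).json()["data"]
            titles += [child["data"]["title"] for child in data["children"]]
            after = data["after"]  # token pointing at the next page
            if after is None:      # no more pages to fetch
                break

        print(len(titles))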

  • @rishabhkothari1763
    @rishabhkothari1763 3 years ago +2

    Hey! Why didn't you import PRAW at the start, since it is the official Reddit API wrapper for Python?
    Thanks in advance for answering my query!

    • @MakeDataUseful
      @MakeDataUseful  3 years ago +1

      Hey, thanks for the comment! I think there is still an opportunity to put out a video on PRAW, but at the time I wanted the community to get a more general idea of how you would go about accessing an API and interpreting the results using things like requests. Plenty more videos to come!

  • @axvex595
    @axvex595 3 years ago

    ".json()" around 3:08 isn't working for me. Any ideas?

    • @MakeDataUseful
      @MakeDataUseful  3 years ago +1

      Hey there, what sort of error message are you getting? What does the output look like if you used .text instead?

    • @axvex595
      @axvex595 3 years ago +2

      @MakeDataUseful Never mind, I was requesting the base URL without appending ".json" to the end of it. Sorry to bother you, and thanks for the help.

    • @MakeDataUseful
      @MakeDataUseful  3 years ago +1

      @axvex595 Hey, no stress! I get errors all the time! It's all how we learn 😊

  • @velvetcasuat
    @velvetcasuat 4 years ago

    No views :(

    • @MakeDataUseful
      @MakeDataUseful  3 years ago

      There are one or two now :)

    • @velvetcasuat
      @velvetcasuat 3 years ago

      @MakeDataUseful Yep ... it looks like you are going to reach 1,000 subs soon :D Great job.

  • @axvex595
    @axvex595 3 years ago

    This doesn't make any sense; he's using the requests module and not the Reddit API.

    • @MakeDataUseful
      @MakeDataUseful  3 years ago +2

      Hey Ax, thanks for watching my video and your feedback! A couple of people have pointed out the Python wrapper for the Reddit API. The approach taken in the video still uses Reddit's endpoint to obtain the JSON payload using the requests library.

    • @axvex595
      @axvex595 3 years ago +1

      @MakeDataUseful Thanks for the insanely quick reply!

    • @MakeDataUseful
      @MakeDataUseful  3 years ago +2

      @axvex595 Hey, you took the time to drop a comment; it's the least I can do 🤙🤙

  • @Epipedobatideo
    @Epipedobatideo 4 years ago +2

    Another great video! This is a huge help to me, particularly because you talk through your process.
    If you could do the same while making some seaborn/matplotlib analyses, that would be fantastic!
    Thanks for the videos!

    • @MakeDataUseful
      @MakeDataUseful  4 years ago +1

      Thanks for the great feedback Bruno, much appreciated! Next vid will extend this analysis into visualisation!