Find and Find_All | Web Scraping in Python

  • Published 5 Sep 2024

Comments • 29

  • @babyniq08
    @babyniq08 a year ago +44

    I used to binge-watch Netflix; now I'm binge-watching all your videos. Thank you, Alex, for all your amazing videos!

  • @user-zb2zi1ty7f
    @user-zb2zi1ty7f 2 days ago

    I remember watching this a few years ago when I was starting my journey; it was the best tutorial I had watched. I am currently a senior engineer.

  • @shahrukhahmad4127
    @shahrukhahmad4127 10 months ago +6

    I tried learning web scraping at least 5 times and failed every time, but you made everything simple and handy. Please, it's a request from my side: resume this playlist and teach basic to advanced scraping using Python. I can't learn without you. Thank you in advance, and waiting for more of your videos in the same playlist, Alex.

  • @user-oy5xp9hd6q
    @user-oy5xp9hd6q a month ago +1

    You don't need to use the find function to get text; just try soup.find_all(arguments...)[x].text.strip(). You can write 0, 1, 2, 3... for x depending on which data you want. For example, at 10:15, for x=1 the text must be "Year", because 1 is the second index in Python after the first index 0 (see the sketch below).

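    A minimal sketch of the indexing approach above, assuming the hockey-teams page used in the video (the URL and the <th> headers are assumptions):

        # find() returns the first match; find_all() returns a list you can index.
        import requests
        from bs4 import BeautifulSoup

        url = 'https://www.scrapethissite.com/pages/forms/'  # assumed from the video
        soup = BeautifulSoup(requests.get(url).text, 'html.parser')

        first_header = soup.find('th').text.strip()          # same as find_all('th')[0]
        second_header = soup.find_all('th')[1].text.strip()  # x = 1 -> the second <th>
        print(first_header, second_header)
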
  • @user-ys3td1mt3b
    @user-ys3td1mt3b 8 months ago +2

    I am pretty new to data analysis, and I was working on a project where I needed to scrape data from a website, and this tutorial has been so helpful! I spent hours trying to figure it out; the other tutorials on YouTube don't explain anything or skip steps, so it's hard to learn and personalize it for your own project.
    This, however, was detailed and straight to the point! Thank you so much. You're a lifesaver!

  • @franciscoflor6125
    @franciscoflor6125 a year ago +4

    You are the best; your videos have really helped me a lot.
    This series of web scraping videos has been like you were reading my mind. I was thinking of doing a project on my own, but the only way to get the data is through web scraping.
    Waiting for the next video. One of the questions I have is the procedure to follow if I want to extract information on the hockey teams from page 2, 3, etc. (see the pagination sketch below).

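    A hedged sketch for the pagination question above, assuming the site accepts a page_num query parameter (true for the scrapethissite hockey-teams page, but verify for your own target):

        # Loop over pages and collect every team row into one list.
        import requests
        from bs4 import BeautifulSoup

        base_url = 'https://www.scrapethissite.com/pages/forms/'  # assumed from the video

        all_rows = []
        for page_num in range(1, 4):  # pages 1-3
            page = requests.get(base_url, params={'page_num': page_num})
            soup = BeautifulSoup(page.text, 'html.parser')
            for row in soup.find_all('tr', class_='team'):
                all_rows.append([td.text.strip() for td in row.find_all('td')])

        print(len(all_rows), 'rows collected')
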
  • @ENTJ616
    @ENTJ616 a year ago +2

    Mate, you are out of this world.

  • @katcirce
    @katcirce a month ago

    Thank you for this! Awesome starting point for my NLP project!

  • @ShivaSunkaranam-qx3jf
    @ShivaSunkaranam-qx3jf 5 months ago +1

    If I type soup.Find('div'), nothing displays, but that tag is there in the page source (see the note below).

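    A likely cause, offered as an assumption about the setup: Python is case-sensitive, so the method is find, not Find, and a plain .py script (unlike a notebook) only shows output you explicitly print:

        # Lowercase find(); scripts need print() to display the result.
        from bs4 import BeautifulSoup

        soup = BeautifulSoup('<div>hello</div>', 'html.parser')
        result = soup.find('div')  # soup.Find(...) would not work as intended
        print(result)              # prints: <div>hello</div>
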
  • @nnamdiLdavid
    @nnamdiLdavid 9 months ago

    Thanks for all you do, Alex. Could you be so kind as to continue this series, especially on advanced scraping, like scraping unstructured data, etc.?

  • @ArisingProgram
    @ArisingProgram 5 months ago +1

    Hey Alex,
    I'm trying to grab text that is randomly generated on the Random Word Generator website for my hangman project. The problem is that the text I grab isn't displayed in the HTML; it always shows as loading... What new techniques can you teach us for grabbing this data (see the sketch below)? Thanks!

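    When scraped text comes back as "loading...", the page is probably filling it in with JavaScript after load, which requests + BeautifulSoup never sees. A minimal sketch using a browser-driven approach with Selenium; the URL and the class name are placeholder assumptions, so inspect the real page for the actual selector:

        # Render the page in a real browser, then parse the rendered HTML.
        import time
        from selenium import webdriver
        from bs4 import BeautifulSoup

        driver = webdriver.Chrome()                     # needs Chrome installed
        driver.get('https://randomwordgenerator.com/')  # assumed URL
        time.sleep(2)                                   # crude wait for the JS to run
        soup = BeautifulSoup(driver.page_source, 'html.parser')
        driver.quit()

        # 'generated-word' is a hypothetical class; use the browser inspector
        # to find the real one on the target page.
        words = [el.text.strip() for el in soup.find_all('span', class_='generated-word')]
        print(words)
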
  • @jmc1849
    @jmc1849 6 months ago

    Hi Alex (as if!)
    Thanks for all the content

  • @kaliportis
    @kaliportis a year ago +1

    Hello, I commented on one of your previous videos enquiring about the offer you made in one of your "How to Build a Resume" videos concerning resume reviews. I completely understand if that is no longer the case, considering that video was 3 years ago, but if you are still reviewing resumes, I would love to send mine to you. Have a nice day, and congratulations on hitting 500k.

  • @kajal648
    @kajal648 7 months ago

    Thank you so much, sir. I was caught up in a problem, but I was able to solve it after watching this video.

  • @chu1452
    @chu1452 a year ago

    As an Informatics Engineering graduate, this is easier for me to understand, since we learned HTML back then.

  • @meryemOuyouss2002
    @meryemOuyouss2002 9 months ago

    Thank you, I also finished this playlist.

  • @LavanyaGopal-py6jd
    @LavanyaGopal-py6jd 4 months ago

    Hello, thank you so much for this wonderful tutorial. However, I have one doubt that needs clarifying. I tried this out with the same code and URL you used, but there seems to be a problem with this line -> print(Soup.find_all('p',class_="lead")). The output for this line shows [], which isn't the paragraph from the website. How do I rectify this problem (see the sketch below)? Also, I use IDLE for Python. Once again, your videos are awesome, and I hope you continue making more great coding content.

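    An empty list from find_all usually means the fetched HTML has no matching tag/class combination, often because the request failed, the wrong page was parsed, or the class name differs. A hedged diagnostic sketch, assuming the scrapethissite URL from the video:

        # Check the response itself before blaming the selector.
        import requests
        from bs4 import BeautifulSoup

        url = 'https://www.scrapethissite.com/pages/'  # assumed from the video
        page = requests.get(url)
        print(page.status_code)                        # anything but 200 explains the []

        soup = BeautifulSoup(page.text, 'html.parser')
        print(soup.find_all('p', class_='lead'))       # [] means no match in this HTML
        print('lead' in page.text)                     # does the class appear at all?
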
  • @mxdigitalmediamarketplace
    @mxdigitalmediamarketplace 7 months ago

    Hello, thank you for your tutorial, great info. What editor do you use?

  • @Kaura_Victor
    @Kaura_Victor 5 months ago

    Thanks, Alex!

  • @DeltaXML_Ltd
    @DeltaXML_Ltd a year ago

    Interesting video, keep it up!

  • @monsieurm2904
    @monsieurm2904 9 months ago

    Where can we find the notebook you use throughout the video? :)

  • @Syrviuss
    @Syrviuss a year ago

    Does it work only with static pages, and not sites like Amazon or other shops? There were some problems with the past tutorial when we tried the Amazon web scraping using Python; how can we know the difference (see the sketch below)? Thanks for all your videos ;)

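    A quick hedged check for whether a page is static enough for requests + BeautifulSoup: fetch the raw HTML and see if the text you want is already in it. If it is not, the site builds the content with JavaScript (common for shops like Amazon) and needs a browser-based tool such as Selenium, as in the earlier sketch. The URL and expected text below are assumed examples:

        # Test whether target text exists in the raw, un-rendered HTML.
        import requests

        url = 'https://www.scrapethissite.com/pages/'  # assumed example
        expected_text = 'Hockey Teams'                 # text you expect to scrape

        html = requests.get(url).text
        if expected_text in html:
            print('Static enough: requests + BeautifulSoup should work.')
        else:
            print('Not in the raw HTML: likely JavaScript-rendered; try Selenium.')
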
  • @geoffreycg5650
    @geoffreycg5650 7 months ago

    Is there a next video in the series?

  • @elphasluyuku4167
    @elphasluyuku4167 a year ago

    Hey guys, I am getting an 'SSLCertVerificationError'. Can anyone kindly help me resolve this?

    • @vahidmehdizade5781
      @vahidmehdizade5781 11 months ago +2

      You can work around this with the lines below. It typically occurs when the SSL certificate of the remote server cannot be verified during an HTTPS connection.
      import requests  # needed for the calls below
      requests.packages.urllib3.disable_warnings()  # silence the InsecureRequestWarning
      page = requests.get(url, verify=False)        # skips cert verification (insecure; testing only)