Richard Liaw: A Guide to Modern Hyperparameter Tuning Algorithms | PyData LA 2019

  • Added 9 Jul 2024
  • www.pydata.org
    PyData is an educational program of NumFOCUS, a 501(c)3 non-profit organization in the United States. PyData provides a forum for the international community of users and developers of data analysis tools to share ideas and learn from each other. The global PyData network promotes discussion of best practices, new approaches, and emerging technologies for data management, processing, analytics, and visualization. PyData communities approach data science using many languages, including (but not limited to) Python, Julia, and R.
    PyData conferences aim to be accessible and community-driven, with novice to advanced level presentations. PyData tutorials and talks bring attendees the latest project features along with cutting-edge use cases.
    00:00 Welcome!
    00:10 Help us add time stamps or captions to this video! See the description for details.
    Want to help add timestamps to our YouTube videos to help with discoverability? Find out more here: github.com/numfocus/YouTubeVi...
  • Science & Technology

Comments • 3

  • @saratbhargavachinni5544

    Intuitive and easy to understand. Thanks for sharing the video.

  • @haneulkim4902 • 11 months ago

    Thanks for the great video. One thing I came across while using Ray Tune: when you load data inside the train_mnist function, memory usage grows with the number of concurrent Ray Tune workers, since each of them needs to load the same data. To work around this I created a tf.data.Dataset (using petastorm, which reads from AWS S3 as the source), saved it, and then had each Ray Tune worker load() it within train_mnist(). This seems to reduce memory, but it fills up the disk, so petastorm's ability to stream directly from AWS S3 isn't fully used. So my question is: what is the best practice for parallel HPO with large datasets?
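
    One common answer to this (a hedged sketch, not something covered in the talk): load the dataset once on the driver and share it with every trial through Ray's object store using tune.with_parameters, so each concurrent worker receives a reference to a single shared copy instead of re-loading the data itself. In the sketch below, load_mnist() is a hypothetical stand-in for the petastorm/S3 pipeline described above, and the training body is elided.

```python
# Minimal sketch: share one copy of a large dataset across concurrent
# Ray Tune trials via the object store (assumes Ray >= 1.0).
import numpy as np
from ray import tune

def load_mnist():
    # Hypothetical stand-in for the real petastorm / S3 loading pipeline.
    return np.random.rand(60000, 784), np.random.randint(0, 10, 60000)

def train_mnist(config, data=None):
    x, y = data  # resolved from the shared object store, not re-loaded
    # ... build and fit a model using config["lr"] here ...
    tune.report(mean_accuracy=0.0)  # placeholder metric

data = load_mnist()

# tune.with_parameters() puts `data` into Ray's object store once;
# every trial gets a reference to that single copy (zero-copy reads
# for numpy arrays on the same node), so memory does not grow with
# the number of concurrent workers.
tune.run(
    tune.with_parameters(train_mnist, data=data),
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    num_samples=8,
)
```

    This keeps the driver as the only process that touches the loading pipeline; for datasets too large for a single node's object store, streaming per batch from S3 inside the trainable (rather than materializing the whole dataset per worker) is the usual alternative.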