Chronos: Learning the Language of Time Series with Abdul Fatir Ansari - 685

  • Published 16 Jun 2024
  • Today we're joined by Abdul Fatir Ansari, a machine learning scientist at AWS AI Labs in Berlin, to discuss his paper, "Chronos: Learning the Language of Time Series" - arxiv.org/abs/2403.07815. Fatir explains the challenges of leveraging pre-trained language models for time series forecasting. We explore the advantages of Chronos over statistical models, as well as its promising results in zero-shot forecasting benchmarks. Finally, we address critiques of Chronos, the ongoing research to improve synthetic data quality, and the potential for integrating Chronos into production systems.
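The core idea discussed in the episode, as laid out in the paper, is to make time series "speak the language" of a language model: scale each series by the mean absolute value of its context, quantize the scaled values into a fixed vocabulary of uniform bins, and train a T5-style model with an ordinary next-token cross-entropy objective. The sketch below illustrates that tokenization step; the bin count and clipping range are illustrative assumptions, not the released checkpoints' exact settings.

```python
import numpy as np

def tokenize(context: np.ndarray, n_bins: int = 4096,
             low: float = -15.0, high: float = 15.0):
    """Map a real-valued series to discrete token ids (Chronos-style sketch)."""
    # Scale by the mean absolute value of the context so that series
    # with very different magnitudes share one token vocabulary.
    scale = np.mean(np.abs(context))
    scale = scale if scale > 0 else 1.0
    scaled = np.clip(context / scale, low, high)
    # Uniform bins over [low, high]; each bin id acts as a "word".
    edges = np.linspace(low, high, n_bins + 1)
    tokens = np.clip(np.digitize(scaled, edges) - 1, 0, n_bins - 1)
    return tokens, scale

def detokenize(tokens: np.ndarray, scale: float, n_bins: int = 4096,
               low: float = -15.0, high: float = 15.0) -> np.ndarray:
    """Map token ids back to real values via bin centers, undoing the scaling."""
    centers = low + (np.arange(n_bins) + 0.5) * (high - low) / n_bins
    return centers[tokens] * scale

# Example: a noisy sine wave round-trips through the token vocabulary.
series = np.sin(np.linspace(0, 8 * np.pi, 200)) * 50 + np.random.randn(200)
tokens, scale = tokenize(series)
reconstructed = detokenize(tokens, scale)
```

Because forecasting becomes next-token prediction, sampling several token sequences from the model and de-tokenizing them yields the probabilistic forecasts discussed in the evaluation segment.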
    🎧 / 🎥 Listen or watch the full episode on our page: twimlai.com/go/685.
    🔔 Subscribe to our channel for more great content just like this: czcams.com/users/twimlai?sub_confi...
    🗣️ CONNECT WITH US!
    ===============================
    Subscribe to the TWIML AI Podcast: twimlai.com/podcast/twimlai/
    Follow us on Twitter: @twimlai
    Follow us on LinkedIn: @twimlai
    Join our Slack Community: twimlai.com/community/
    Subscribe to our newsletter: twimlai.com/newsletter/
    Want to get in touch? Send us a message: twimlai.com/contact/
    📖 CHAPTERS
    ===============================
    00:00 - Introduction
    02:11 - Inspiration for Chronos
    04:30 - Overview of statistical models
    07:04 - Overfitting
    08:17 - LLMs in time series forecasting
    10:20 - Tokenization
    15:25 - Why T5?
    16:35 - Data augmentation
    25:28 - Evaluation
    27:45 - Results
    31:15 - In-domain vs. zero-shot
    33:35 - Performance across different patterns
    36:25 - Critique of Chronos
    40:15 - Chronos in production
    41:00 - Future of Chronos
    42:00 - Conclusion
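For listeners who want to try the zero-shot setup discussed above (31:15, 40:15), the open-source chronos-forecasting package wraps the released checkpoints in a ChronosPipeline. The snippet below is a minimal sketch of that workflow; the model size, device, and sampling settings are illustrative choices, not a recommended production configuration.

```python
import torch
from chronos import ChronosPipeline  # pip install chronos-forecasting

# Load a pretrained Chronos checkpoint ("amazon/chronos-t5-small" is
# one of the released model sizes).
pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-small",
    device_map="cpu",
    torch_dtype=torch.float32,
)

# Any 1-D history works zero-shot: no fine-tuning on the target series.
context = torch.tensor([112.0, 118.0, 132.0, 129.0, 121.0, 135.0,
                        148.0, 148.0, 136.0, 119.0, 104.0, 118.0])

# Sample token trajectories and de-tokenize them into value paths.
forecast = pipeline.predict(context, prediction_length=12, num_samples=20)

# Quantiles across the sampled paths give prediction intervals.
low, median, high = torch.quantile(
    forecast[0], torch.tensor([0.1, 0.5, 0.9]), dim=0
)
```

Quantiles over the sampled paths are how the probabilistic results discussed in the episode are reported, rather than a single point forecast.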
    🔗 LINKS & RESOURCES
    ===============================
    Large Language Models Are Zero-Shot Time Series Forecasters - arxiv.org/pdf/2310.07820
    Time-LLM: Time Series Forecasting by Reprogramming Large Language Models - arxiv.org/abs/2310.01728
    LLM4TS: Aligning Pre-Trained LLMs as Data-Efficient Time-Series Forecasters - arxiv.org/pdf/2308.08469
    Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting - arxiv.org/abs/2310.08278
    Unified Training of Universal Time Series Forecasting Transformers (Moirai) - arxiv.org/abs/2402.02592
    📸 Camera: amzn.to/3TQ3zsg
    🎙️Microphone: amzn.to/3t5zXeV
    🚦Lights: amzn.to/3TQlX49
    🎛️ Audio Interface: amzn.to/3TVFAIq
    🎚️ Stream Deck: amzn.to/3zzm7F5
  • Science & Technology

Comments • 3

  • @syedmohammadghazi6133 · 17 days ago

    Hey, that's good work!
    I've done research along similar lines and plan to publish it at the end of the year.
    Just wanted to ask: have you heard of TranAD? It's mainly an anomaly-detection model, but I'm curious how well it would do for your use case.

  • @btcoal · 21 days ago +1

    Paper link?

    • @twimlai · 19 days ago

      Hi @btcoal. Here's the paper link: arxiv.org/abs/2403.07815.