Self-attention mechanism explained | Self-attention explained | scaled dot product attention

  • Published 15. 05. 2024
  • Self-attention mechanism explained | Self-attention explained | self-attention in deep learning
    #ai #datascience #machinelearning
    Hello,
    My name is Aman and I am a Data Scientist.
    All amazing data science courses at the most affordable price here: www.unfolddatascience.com
    Book a one-on-one session here (note: these sessions are chargeable): docs.google.com/forms/d/1Wgle...
    Follow on Instagram: unfold_data_science
    About Unfold Data Science: This channel helps people understand the basics of data science through simple examples. Anybody without prior knowledge of computer programming, statistics, machine learning, or artificial intelligence can gain a high-level understanding of data science through this channel. The videos are not very technical in nature and can therefore be easily grasped by viewers from different backgrounds.
    Book recommendation for Data Science:
    Category 1 - Must Read For Every Data Scientist:
    The Elements of Statistical Learning by Trevor Hastie - amzn.to/37wMo9H
    Python Data Science Handbook - amzn.to/31UCScm
    Business Statistics By Ken Black - amzn.to/2LObAA5
    Hands-On Machine Learning with Scikit Learn, Keras, and TensorFlow by Aurelien Geron - amzn.to/3gV8sO9
    Category 2 - Overall Data Science:
    The Art of Data Science By Roger D. Peng - amzn.to/2KD75aD
    Predictive Analytics By Eric Siegel - amzn.to/3nsQftV
    Data Science for Business By Foster Provost - amzn.to/3ajN8QZ
    Category 3 - Statistics and Mathematics:
    Naked Statistics By Charles Wheelan - amzn.to/3gXLdmp
    Practical Statistics for Data Scientist By Peter Bruce - amzn.to/37wL9Y5
    Category 4 - Machine Learning:
    Introduction to machine learning by Andreas C Muller - amzn.to/3oZ3X7T
    The Hundred Page Machine Learning Book by Andriy Burkov - amzn.to/3pdqCxJ
    Category 5 - Programming:
    The Pragmatic Programmer by David Thomas - amzn.to/2WqWXVj
    Clean Code by Robert C. Martin - amzn.to/3oYOdlt
    My Studio Setup:
    My Camera: amzn.to/3mwXI9I
    My Mic: amzn.to/34phfD0
    My Tripod: amzn.to/3r4HeJA
    My Ring Light: amzn.to/3gZz00F
    Join the Facebook group :
    groups/41022...
    Follow on medium: / amanrai77
    Follow on quora: www.quora.com/profile/Aman-Ku...
    Follow on Twitter: @unfoldds
    Watch the Introduction to Data Science full playlist here: • Data Science In 15 Min...
    Watch python for data science playlist here:
    • Python Basics For Data...
    Watch the statistics and mathematics playlist here :
    • Measures of Central Te...
    Watch End to End Implementation of a simple machine-learning model in Python here:
    • How Does Machine Learn...
    Learn Ensemble Model, Bagging, and Boosting here:
    • Introduction to Ensemb...
    Build Career in Data Science Playlist:
    • Channel updates - Unfo...
    Artificial Neural Network and Deep Learning Playlist:
    • Intuition behind neura...
    Natural language Processing playlist:
    • Natural Language Proce...
    Understanding and building a recommendation system:
    • Recommendation System ...
    Access all my codes here:
    drive.google.com/drive/folder...
    Have a different question for me? Ask me here : docs.google.com/forms/d/1ccgl...
    My Music: www.bensound.com/royalty-free...
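The video itself is not transcribed here, but the scaled dot-product attention named in the title can be sketched in a few lines. This is a minimal NumPy illustration, not code from the video; the function name and the toy input are my own, and in self-attention the queries, keys, and values are all (projections of) the same input sequence:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, the core of self-attention."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted mix of the value rows

# Toy example: 3 tokens with embedding dimension 4; using the same matrix
# for Q, K, and V mimics self-attention without learned projections.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4): one attended vector per input token
```

The division by sqrt(d_k) keeps the dot products from growing with the embedding dimension, which would otherwise push the softmax into a regime with vanishingly small gradients.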

Comments • 13

  • @user-kt5qu2ne7l
    @user-kt5qu2ne7l 17 days ago +1

    Thank you for your clear explanation

  • @ajitkulkarni1702
    @ajitkulkarni1702 16 days ago

    Best explanation of self-attention!!!

  • @jayeshsingh116
    @jayeshsingh116 16 days ago

    well explained
    thank you for covering these topics

  • @AnkitGupta-rj4yy
    @AnkitGupta-rj4yy 15 days ago

    Thank you for explaining this to us ❤ in an easy way

  • @irfanhaider3021
    @irfanhaider3021 5 hours ago

    Kindly make a video on GRU layer as well.

  • @dinu9670
    @dinu9670 11 days ago

    You are a saviour man. Great explanation. Please keep doing these videos 🙏

  • @funwithtechnology6526
    @funwithtechnology6526 10 days ago

    Thank you for the very clear explanation :) . I have a small question here. In self-attention, is there a limit to the dimension of the final attention embedding space?

  • @manoj1bk
    @manoj1bk 16 days ago

    Can the self-attention mechanism be used (as an embedding layer) before an LSTM in the context of time series analysis?

  • @RakeshKumarSharma-nc3cj

    awesome video

  • @dhirajpatil6776
    @dhirajpatil6776 16 days ago

    Please make a video explaining the transformer architecture

  • @ajitkulkarni1702
    @ajitkulkarni1702 16 days ago

    Please make videos on multi-head attention...