Residual Networks (ResNet) [Physics Informed Machine Learning]

  • Added 27 Jun 2024
  • This video discusses Residual Networks, one of the most popular machine learning architectures that has enabled considerably deeper neural networks through jump/skip connections. This architecture mimics many of the aspects of a numerical integrator.
    This video was produced at the University of Washington, and we acknowledge funding support from the Boeing Company.
    %%% CHAPTERS %%%
    00:00 Intro
    01:09 Concept: Modeling the Residual
    03:26 Building Blocks
    05:59 Motivation: Deep Network Signal Loss
    07:43 Extending to Classification
    09:00 Extending to DiffEqs
    10:16 Impact of CVPR and ResNet
    12:17 ResNets and Euler Integrators
    13:34 Neural ODEs and Improved Integrators
    16:07 Outro
  • Science & Technology
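The description's claim that this architecture "mimics many of the aspects of a numerical integrator" can be sketched in a few lines. This is a hypothetical toy example (random stand-in weights, not code from the video): a residual block computes x + f(x), which is exactly a forward-Euler step of dx/dt = f(x) with unit step size.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny residual block: x_next = x + f(x), where f is a small MLP.
W1 = rng.normal(scale=0.1, size=(4, 4))
W2 = rng.normal(scale=0.1, size=(4, 4))

def f(x):
    # the residual function learned by the block
    return W2 @ np.tanh(W1 @ x)

def resnet_block(x):
    # skip connection: identity plus learned residual
    return x + f(x)

def euler_step(x, h=1.0):
    # one forward-Euler step of dx/dt = f(x)
    return x + h * f(x)

x = rng.normal(size=4)
# With step size h = 1, the residual block IS a forward-Euler step.
print(np.allclose(resnet_block(x), euler_step(x, h=1.0)))  # True
```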

Comments • 19

  • @mostafasayahkarajy508 • 11 days ago +9

    Thank you very much for your videos. I am glad that besides the classical sources for promoting science (such as books and papers), your lectures can also be found on YouTube. In my opinion, Prof. Brunton is the best provider of YouTube lectures, and I don't want to miss any of them.

  • @goodlack9093 • 12 days ago +3

    Thank you for this content!
    Love your approach. Please never stop educating people. We all need teachers like you! :) P.S. Enjoying reading your book.

  • @physicsanimated1623 • 11 days ago +1

    Hi Steve, this is Vivek Karmarkar! Thanks for the video. Great content as usual, and it keeps me motivated to create my own PINN content as well. Looking forward to the next video in the series, and I would love to talk PINN content creation with you!
    I have been thinking about presenting PINNs with ODEs as examples, and it's nice to contrast them with Neural ODEs. Nomenclature aside, it looks like the power of NNs as universal approximators lets us model either the flow field (Neural ODEs) or the physical field of interest (PINNs) for analysis, which is pretty cool!
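The contrast drawn in this comment can be made concrete with a toy ODE (a hypothetical sketch; `nn` is a stand-in for a trained network, not actual PINN or Neural ODE library code). For dx/dt = -x, the Neural-ODE view trains a network to be the flow field and integrates it, while the PINN view trains a network to be the solution x(t) and penalizes the ODE residual.

```python
import numpy as np

# Toy physics: dx/dt = -x, with exact solution x(t) = x0 * exp(-t).

def nn(x):
    # stand-in for a trained network; here it equals the true flow field
    return -x

# Neural-ODE view: the network models the flow field, and the trajectory
# is recovered by integrating it (forward Euler here, for simplicity).
def integrate(x0, t_end, n_steps=1000):
    h = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        x = x + h * nn(x)
    return x

x0 = 2.0
approx = integrate(x0, 1.0)
print(abs(approx - x0 * np.exp(-1.0)) < 1e-2)  # True: close to the exact solution

# PINN view: a network x_hat(t) models the solution field directly, and
# the ODE enters the training loss as a residual to be driven to zero:
def pinn_residual(x_hat, dx_hat_dt):
    return dx_hat_dt + x_hat
```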

  • @lorisdemuth374 • 11 days ago +1

    Many thanks for the extremely good videos. Really well explained and easy to understand.
    A video on "Augmented neural ODEs" would go well with "neural ODEs" 😊

  • @culturemanoftheages • 11 days ago

    Excellent explanation! For those interested in LLMs, residual connections are also featured in the vanilla transformer block. The idea is similar to CNN ResNets, but instead of gradually adding pixel resolution, each block adds semantic "resolution" to the original embedded text input.
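The pattern this comment describes can be sketched as a toy transformer block (a hypothetical single-head example with random weights; real blocks add layer norm, multiple heads, and masking). Note how each sublayer's output is added onto its input, just as in a ResNet block, rather than replacing it.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # embedding dimension

# Toy single-head self-attention and MLP weights (illustration only).
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
Wm = rng.normal(scale=0.1, size=(d, d))

def attention(X):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return weights @ V

def transformer_block(X):
    # Residual (skip) connections, as in ResNet: each sublayer refines
    # the running representation instead of overwriting it.
    X = X + attention(X)         # attention sublayer + skip
    X = X + np.tanh(X @ Wm)      # MLP sublayer + skip
    return X

X = rng.normal(size=(5, d))      # 5 tokens, d-dimensional embeddings
Y = transformer_block(X)
print(Y.shape)  # (5, 8)
```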

  • @sainissunil • 12 days ago +1

    Thank you for making this. I watched your video on Neural ODEs before I watched this. It is much easier to understand the Neural ODE video now that I have watched this.
    I would love to watch a video about the ResNet classifier idea you discuss here. If you have already done that, please add a link here.
    Thanks, and this is awesome!

  • @ultrasound1459 • 11 days ago +1

    ResNet is literally the best thing that has happened in deep learning.

  • @saraiva407 • 12 days ago

    Thank you SO MUCH, Prof. Steve!! I intend to study neural networks in my graduate courses thanks to your lectures!! :D

  • @Daniboy370 • 11 days ago +1

    You have an impressive ability to simplify complex subjects.

  • @ramimohammed3132 • 12 days ago

    thank u sire!

  • @Ishaheennabi • 12 days ago +1

    Love from Kashmir, India ❤❤❤

  • @davidmccabe1623 • 11 days ago

    Does anyone know if transformers have superseded resnets for image classification?

    • @culturemanoftheages • 11 days ago +1

      Vision transformer (ViT) architectures have been studied that outperform CNN-based approaches in some respects, but they require more training data, more resources to train, and in general yield a bulkier model than a CNN would. They also use a different information-concentrating mechanism (attention for transformers vs. convolution for CNNs), so I imagine there are certain vision applications where transformers might be preferable.

  • @HansPeter-gx9ew • 5 days ago

    To be honest, I find his videos very difficult to understand; in my opinion he explains things poorly. For example, 14:14 is the first more complicated part, and I don't really get what it is about. I wouldn't understand ResNet from his explanation either if I had no prior knowledge of it. He just assumes I am an expert in math and deep learning.

  • @maksymriabov1356 • 11 days ago

    IMHO you should speak a little faster and make fewer jests; for the scientists watching this, it wastes time and attention.

    • @chrisnoble04 • 11 days ago +3

      You can always run it at 2x speed....
