Physics Informed Neural Networks (PINNs) [Physics Informed Machine Learning]

  • Added 25 Jun 2024
  • This video introduces PINNs, or Physics-Informed Neural Networks. PINNs are a simple modification of a neural network that adds a PDE residual to the loss function to promote solutions that satisfy known physics. For example, if we wish to model a fluid flow field and we know it is incompressible, we can add the divergence of the field to the loss function to drive it toward zero. This approach relies on the automatic differentiability of neural networks (i.e., backpropagation) to compute the partial derivatives used in the PDE loss term.
    Original PINNs paper: www.sciencedirect.com/science...
    Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
    M. Raissi, P. Perdikaris, G.E. Karniadakis
    Journal of Computational Physics
    Volume 378: 686-707, 2019
    This video was produced at the University of Washington, and we acknowledge funding support from the Boeing Company
    %%% CHAPTERS %%%
    00:00 Intro
    01:54 PINNs: Central Concept
    06:38 Advantages and Disadvantages
    11:39 PINNs and Inference
    15:23 Recommended Resources
    19:33 Extending PINNs: Fractional PINNs
    21:40 Extending PINNs: Delta PINNs
    25:33 Failure Modes
    29:40 PINNs & Pareto Fronts
    31:57 Outro
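    The incompressibility example in the description can be sketched in a few lines of PyTorch. This is purely illustrative code (not from the video or the paper): a small network maps coordinates (x, y) to a velocity field (u, v), and automatic differentiation supplies the partial derivatives for a divergence penalty in the loss.

    ```python
    # Minimal PINN-style physics loss (illustrative sketch, not the video's code):
    # a network maps (x, y) -> (u, v), and we penalize the divergence
    # du/dx + dv/dy so the learned flow field is approximately incompressible.
    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))

    def divergence_loss(xy):
        xy = xy.clone().requires_grad_(True)       # track gradients w.r.t. inputs
        uv = net(xy)                               # predicted velocity field (u, v)
        u, v = uv[:, 0], uv[:, 1]
        # autograd gives per-sample input gradients: rows of [d/dx, d/dy]
        du = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]
        dv = torch.autograd.grad(v.sum(), xy, create_graph=True)[0]
        div = du[:, 0] + dv[:, 1]                  # du/dx + dv/dy
        return (div ** 2).mean()                   # drive divergence toward zero

    # Total loss = data mismatch + physics residual, as described above.
    xy = torch.rand(64, 2)
    loss = divergence_loss(xy)  # in practice, add a data term: mse(net(x_data), y_data)
    loss.backward()
    ```

    In a real PINN this physics term is weighted and summed with an ordinary data-fitting loss; `create_graph=True` is what lets the optimizer backpropagate through the PDE derivatives themselves.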
  • Science & Technology

Comments • 58

  • @rehankhan-gn2jr • 1 month ago +16

    The way of teaching is highly beneficial and outstanding. Thank you, Steven!

  • @alessandrobeatini1882 • 1 month ago +7

    This is hands down one of the best videos I've seen on YouTube. Great work, keep it up!

  • @jiaminxu7275 • 26 days ago +13

    Hi Prof. Brunton, I am a Ph.D. student at UT Austin majoring in Mechanical Engineering with a specialization in dynamical systems and control. Your videos have been helping me ever since I began my Ph.D., either by giving me a deeper understanding of fundamental knowledge or by broadening my horizons. I just want to express my great gratitude to you again, and I hope I can meet you at a conference someday so that I can thank you in person.

    • @The_Quaalude • 25 days ago +4

      Getting a PhD and learning from YouTube is wild 😭

    • @arnold-pdev • 22 days ago +1

      ​@@The_Quaalude Why?

    • @The_Quaalude • 22 days ago +1

      @@arnold-pdev bro is paying all that money just to learn something online for free

    • @kaihsiangju • 22 days ago +3

      @@The_Quaalude Usually, PhD students in the U.S. get paid and do not need to pay tuition.

    • @codybarton2090 • 18 days ago +1

      I threw some concepts up on Reddit (grand unified theory) and some other places for a binary growth function based on how the internet works across all these different platforms.

  • @abhisheksaini5217 • 1 month ago +4

    Thank you, Professor.😃

  • @code2compass • 27 days ago +4

    Steve your videos are always helpful, clear and concise. Thank you so much for such amazing content. You are my hero

  • @markseagraves5486 • 25 days ago +1

    Very helpful Steven. I work in consciousness studies and find too often the math is written off as too complicated. On the other side, many computational scientists may write off consciousness studies as too ethereal to be of much value. Bridging these two worlds with insight and rigor, I feel advances our understanding of both artificial and human intelligence. You have contributed to this effort here. Thank you.

  • @anthonymiller6234 • 25 days ago

    Awesome video again Steve. Thanks so much.

  • @ryansoklaski8242 • 27 days ago +7

    I would love to see a video on Universal ODEs (which leverages auto-diff through diffEQ solvers). Chris Rackauckas' work in the Julia language on these methods has been striking - would love to see your take on it.

    • @Eigensteve • 27 days ago +7

      Already filmed and in the queue :)

    • @ryansoklaski8242 • 27 days ago +1

      @@Eigensteve I'm so excited to hear this.
      I recommend you so highly to my students and colleagues. I just wish I had your lessons when I was a college student way back when. Thanks for everything.

  • @THEPAGMAN • 26 days ago

    This is really helpful; if only you had posted it sooner! Thanks.

  • @rudypieplenbosch6752 • 26 days ago

    I was waiting for this; hope to see more about this subject. Thanks a lot.

  • @Anorve • 26 days ago

    fantastic! As always

  • @mithundeshmukh8 • 27 days ago +22

    Please share the references; only one link is visible.

    • @tillsteh7273 • 24 days ago +3

      Dude they are literally in the video. Just use google.

    • @DrakenRS78 • 8 days ago

      Also - take a look at his textbook for further reference

  • @reversetransistor4129 • 27 days ago +2

    Nice, kinda gives me ideas to mix control theories together.

  • @mostafasayahkarajy508 • 27 days ago

    Thank you very much for the lecture. I am looking forward to your next lecture on this topic.

  • @victormurphy3511 • 27 days ago

    Great video. Thank you.

  • @codybarton2090 • 18 days ago +1

    Loved the video ❤️❤️

  • @blacklabelmansociety • 27 days ago

    Hi Professor Steve. I’d love to see a series on Transformers. Thanks for your content, greetings from Brazil.

  • @nafisamehtaj8779 • 14 days ago

    Prof. Brunton, it would be a great help if you could cover neural operators (DeepONets) in one of your videos. Thanks for all the amazing videos; they make learning easier for grad students.

  • @drozdchannel8707 • 23 days ago

    Great video! It may be useful to do another video about Neural Operators; as far as I know, they are more stable and faster on many physical tasks.

  • @Obbe79 • 13 days ago

    PINNs usually require more training. A lot of attention must be given to activation functions.

  • @MyrLin8 • 27 days ago

    excellent. thanks :)

  • @user-qp2ps1bk3b • 27 days ago

    very nice!

  • @caseybackes • 27 days ago

    i knew someone would end up working on this soon. really excited to see some sophisticated applications!

  • @moisesbessalle • 27 days ago +1

    Can't you also clip/trim the search space to the possible range of output values, to speed it up before inference? So, for example, the velocities will be a positive value less than some threshold that depends on your setting?
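    One common way to implement the range idea above (purely an illustrative sketch on my part, not something shown in the video) is to bound the network's output head with a rescaled tanh, so predictions can never leave a known physical interval:

    ```python
    # Hypothetical sketch: constrain a network's outputs to a known physical
    # range [lo, hi] by rescaling a tanh head, so no prediction leaves the range.
    import torch
    import torch.nn as nn

    class BoundedHead(nn.Module):
        def __init__(self, lo: float, hi: float):
            super().__init__()
            self.lo, self.hi = lo, hi
            self.body = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

        def forward(self, x):
            t = torch.tanh(self.body(x))                             # in (-1, 1)
            return self.lo + 0.5 * (t + 1.0) * (self.hi - self.lo)   # in (lo, hi)

    net = BoundedHead(0.0, 10.0)   # e.g. speeds known a priori to lie in [0, 10]
    y = net(torch.randn(8, 2))
    ```

    This bakes the prior knowledge into the architecture rather than the loss, which complements the PINN's soft physics penalty.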

  • @calvinholt6364 • 24 days ago

    This is much easier to comprehend than the course given by the author GK. He should just point his students to you. 😅

  • @sedenions • 26 days ago

    Have you made a video on embedding and fitting networks for running simulation inference?

  • @MariaHeger-tb6cv • 20 days ago

    I was thinking about your comment that rules of physics become expressions to be optimized. Unfortunately, I think that they are absolute rules that should be enforced at every stage of the process. Maybe only at the last step? It’s like allowing an accountant to have errors knowing that the overall performance is better?

  • @AndrewConsroe • 26 days ago

    PINN foundation models, even if domain specific at first, would be really cool. I see one paper from a quick google search with some early positive results. Even if you do have to finetune to your problem it would beat scratch training for every new application. I wonder if the architecture could be modified to separate the physics from the data to make the fine tuning more effective/efficient. Do we have more insight into the phase space of nets with low/zero physics loss?

  • @alexanderskusnov5119 • 27 days ago

    What about Kolmogorov-Arnold networks (KAN)?

  • @thepanzymancan • 27 days ago

    Asking specifically with regard to the spring-mass-damper system: how well does the trained NN perform when you give it different initial values than the ones used for training? In general, when you have ODEs of a mechanical system, can you train the NN (or another architecture) with just one data set of the system doing its thing (one that captures both transient and steady-state dynamics), or do you need different "runs" of the system exploring many combinations of states for the NN to be generalizable in the end? I want to start exploring the use of PINNs for my research and would like to hear PINN users' opinions and experiences. Thanks!

    • @Jononor • 26 days ago

      I recommend testing it out yourself! Great way of getting into it, building intuition and experience on simplified problems

  • @alshahriarbd • 26 days ago

    I think you forgot to put the link to the PyTorch example tutorials in the description.

  • @muthukamalan.m6316 • 20 days ago

    Wonderful content; a code sample would be helpful.

  • @mintakan003 • 27 days ago

    Is there anything that works well for chaotic systems?

    • @arnold-pdev • 22 days ago

      Think about what the definition of "chaos" is, and you'll have your answer.

  • @cfddoc • 27 days ago

    no audio?

  • @notu483 • 23 days ago

    What if you use KAN instead of MLP?

    • @arnold-pdev • 22 days ago

      Sounds like the start of a research question

  • @arnold-pdev • 22 days ago +1

    PINNs have to be one of the most over-hyped ML concepts... and that's stiff competition.

    • @arnold-pdev • 22 days ago

      On one level, it's an unprincipled way of doing data assimilation. On another level, it's an unprincipled way of doing numerical integration. Yawn.
      Great vid tho!

  • @The_Quaalude • 25 days ago +4

    Who else is high af rn⁉️

  • @codybarton2090 • 18 days ago

    Wonder how AI is gonna use this?

  • @alexroberts6416 • 26 days ago

    I'm sorry, what? 😁