Physics Informed Neural Networks (PINNs) [Physics Informed Machine Learning]
- Date added: 25 Jun 2024
- This video introduces PINNs, or Physics-Informed Neural Networks. PINNs are a simple modification of a neural network that adds a PDE residual to the loss function to promote solutions that satisfy known physics. For example, if we wish to model a fluid flow field and we know it is incompressible, we can add the divergence of the field to the loss function to drive it toward zero. This approach relies on automatic differentiation in neural networks (i.e., backpropagation) to compute the partial derivatives used in the PDE loss term.
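The incompressibility example above can be sketched in a few lines of PyTorch: use autograd to compute du/dx + dv/dy of the network's output and penalize its square alongside an ordinary data-fit term. This is a minimal illustration, not code from the video; the network size, sample points, placeholder "measurements", and equal loss weighting are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

# Hypothetical small MLP mapping coordinates (x, y) -> velocity components (u, v).
net = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 2),
)

def divergence_loss(net, xy):
    """Penalize (du/dx + dv/dy)^2 so the learned field is (nearly) incompressible."""
    xy = xy.clone().requires_grad_(True)   # track gradients w.r.t. inputs
    uv = net(xy)
    u, v = uv[:, 0], uv[:, 1]
    # Autograd gives the partial derivatives of each output w.r.t. (x, y).
    du = torch.autograd.grad(u, xy, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    dv = torch.autograd.grad(v, xy, grad_outputs=torch.ones_like(v), create_graph=True)[0]
    div = du[:, 0] + dv[:, 1]              # du/dx + dv/dy
    return (div ** 2).mean()

# Total loss = data mismatch + physics residual (the relative weight is a tuning choice).
xy = torch.rand(64, 2)
uv_measured = torch.zeros(64, 2)           # placeholder for real flow measurements
loss = nn.functional.mse_loss(net(xy), uv_measured) + divergence_loss(net, xy)
loss.backward()                            # gradients flow through both terms
```

Because `create_graph=True` keeps the derivative computation in the graph, the physics term itself is differentiable and trains the network just like the data term.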
Original PINNs paper: www.sciencedirect.com/science...
Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
M. Raissi, P. Perdikaris, G.E. Karniadakis
Journal of Computational Physics
Volume 378: 686-707, 2019
This video was produced at the University of Washington, and we acknowledge funding support from the Boeing Company
%%% CHAPTERS %%%
00:00 Intro
01:54 PINNs: Central Concept
06:38 Advantages and Disadvantages
11:39 PINNs and Inference
15:23 Recommended Resources
19:33 Extending PINNs: Fractional PINNs
21:40 Extending PINNs: Delta PINNs
25:33 Failure Modes
29:40 PINNs & Pareto Fronts
31:57 Outro
Your way of teaching is highly beneficial and outstanding. Thank you, Steven!
This is hands down one of the best videos I've seen on YouTube. Great work, keep it up!
Hi Prof. Brunton, I am a Ph.D. student from UT Austin majoring in Mechanical Engineering with a specialization in dynamical systems and control. Your videos have been helping me by either giving me a deeper understanding of fundamental knowledge or broadening my horizons, ever since I began my Ph.D. I just want to express my great gratitude to you again, and I hope I can meet you at a conference so that I can thank you in person.
Getting a PhD and learning from YouTube is wild 😭
@@The_Quaalude Why?
@@arnold-pdev bro is paying all that money just to learn something online for free
@@The_Quaalude Usually, PhD students in the U.S. get paid and don't need to pay tuition.
I threw some concepts up on Reddit grand unified theory and some other places for a binary growth function based on how internet work with all these different platforms
Thank you, Professor.😃
Steve your videos are always helpful, clear and concise. Thank you so much for such amazing content. You are my hero
Very helpful Steven. I work in consciousness studies and find too often the math is written off as too complicated. On the other side, many computational scientists may write off consciousness studies as too ethereal to be of much value. Bridging these two worlds with insight and rigor, I feel advances our understanding of both artificial and human intelligence. You have contributed to this effort here. Thank you.
Awesome video again Steve. Thanks so much.
I would love to see a video on Universal ODEs (which leverages auto-diff through diffEQ solvers). Chris Rackauckas' work in the Julia language on these methods has been striking - would love to see your take on it.
Already filmed and in the queue :)
@@Eigensteve I'm so excited to hear this.
I recommend you so highly to my students and colleagues. I just wish I had your lessons when I was a college student way back when. Thanks for everything.
This is really helpful; if only you had posted this sooner! Thanks
I was waiting for this; hope to see more on this subject. Thanks a lot.
Fantastic, as always!
Please share the references; only one link is visible.
Dude they are literally in the video. Just use google.
Also - take a look at his textbook for further reference
Nice, kinda gives me ideas to mix control theories together.
Thank you very much for the lecture. I am looking forward to your next lecture on this topic.
Great video. Thank you.
Loved the video ❤️❤️
Hi Professor Steve. I’d love to see a series on Transformers. Thanks for your content, greetings from Brazil.
Prof. Brunton, it would be a great help if you could cover neural operators (DeepONets) in one of your videos. Thanks for all the amazing videos; they make learning easier for grad students.
Great video! It may be useful to do another video about Neural Operators. They are more stable and faster in many physical tasks, as far as I know.
PINNs usually require more training. A lot of attention must be given to activation functions.
excellent. thanks :)
very nice!
i knew someone would end up working on this soon. really excited to see some sophisticated applications!
Can't you also clip/trim the search space to the possible range of output values to speed it up before inference? So, for example, the velocities will be a positive integer with values less than some threshold that depends on your setting?
This is much easier to comprehend than the course given by the author GK. He should just point you to us. 😅
Have you made a video on embedding and fitting networks for running simulation inference?
I was thinking about your comment that rules of physics become expressions to be optimized. Unfortunately, I think that they are absolute rules that should be enforced at every stage of the process. Maybe only at the last step? It’s like allowing an accountant to have errors knowing that the overall performance is better?
PINN foundation models, even if domain specific at first, would be really cool. I see one paper from a quick google search with some early positive results. Even if you do have to finetune to your problem it would beat scratch training for every new application. I wonder if the architecture could be modified to separate the physics from the data to make the fine tuning more effective/efficient. Do we have more insight into the phase space of nets with low/zero physics loss?
What about Kolmogorov-Arnold networks (KAN)?
Specifically asking with regard to the spring-mass-damper system: how well does the trained NN perform when you give it different initial values than the ones used for training? In general, when you have ODEs of a mechanical system, can you train the NN (or other architecture) with just one data set of the system doing its thing (one that captures both transients and steady-state dynamics), or do you need different "runs" of the system exploring many combinations of states for the NN to end up generalizable? I want to start exploring the use of PINNs for my research and would like to hear PINN users' opinions and experiences. Thanks!
I recommend testing it out yourself! Great way of getting into it, building intuition and experience on simplified problems
I think you forgot to put the link to the PyTorch example tutorials in the description.
Wonderful content; any code sample would be helpful.
Is there anything that works well for chaotic systems?
Think about what the definition of "chaos" is, and you'll have your answer.
no audio?
What if you use KAN instead of MLP?
Sounds like the start of a research question
PINNs have to be one of the most over-hyped ML concepts... and that's stiff competition.
On one level, it's an unprincipled way of doing data assimilation. On another level, it's an unprincipled way of doing numerical integration. Yawn.
Great vid tho!
Who else is high af rn⁉️
Wonder how ai is gonna use this ?
I'm sorry, what? 😁