What is ergodicity? - Alex Adamou
- Uploaded 15. 08. 2024
- Alex Adamou of the London Mathematical Laboratory (LML) gives a simple definition of ergodicity and explains the importance of this under-appreciated scientific concept. The talk is in three parts:
- a basic definition of ergodicity in terms of stochastic processes;
- how ergodicity arose historically and why it matters to scientists today;
- examples of ergodic and non-ergodic processes.
This talk is part of the Ergodicity Economics (EE) research program at LML. The program is a redevelopment of economic theory without the foundational, and often flawed, assumption that expectation values and time averages of economic observables are equal. By focusing on physically relevant averages, many open problems in economics find natural solutions.
Find out more at the EE portal www.ergodicityeconomics.com
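The distinction in the description above - a time average along one trajectory versus an expectation value across an ensemble - can be sketched in a few lines of Python (a toy illustration, not code from the talk; the ±1 coin payoff is my assumption):

```python
import random

random.seed(0)

# Fair coin paying +1 or -1. For this (ergodic) process, the long-time
# average of one trajectory matches the ensemble average across many
# independent trials: both are close to 0.

def time_average(steps=100_000):
    """Average payoff along a single long trajectory."""
    return sum(random.choice([1, -1]) for _ in range(steps)) / steps

def ensemble_average(trials=100_000):
    """Average payoff over many independent single flips."""
    return sum(random.choice([1, -1]) for _ in range(trials)) / trials

print(time_average())      # close to 0
print(ensemble_average())  # close to 0 -> the two averages agree
```

For the multiplicative-growth example later in the talk, the two averages diverge, which is exactly the non-ergodicity under discussion.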
Big up-vote for clarity of exposition: such an underrated property, yet so obviously helpful. It actually takes a lot of work to achieve this, so chapeau to you Alex.
Many thanks!
Absolutely excellent! Such clarity brought to an overcomplicated topic. Can't wait for more videos.
Loved the video - if only YouTube were to fill my feed with this kind of material ...
Brilliant! Finally a REAL mathematician explained this ambiguous and difficult concept the way it's supposed to be explained..)
p.s.: the pitch and calmness during this talk are also brilliant..)
@@alexadamou7039 Thank you, very clear explanation. About the example at 12:25: it is obvious it's not ergodic because the transformation is not measure-preserving. In other words, in each iteration you are changing the probabilities of red and green balls. The first condition for a transformation to be ergodic is to be measure-preserving.
This is extraordinarily fascinating. Please keep educating us. This is so exciting!!
This is a fantastic video. Looking forward to other videos...
Amazing content, so straightforward to follow this complicated topic. Many thanks
It is a very enlightening video!
Thank you! I'm glad you found it useful.
Once I have understood something, I try to explain it as clearly as possible, not only to check my own understanding but also because I have run into so many papers and books where things were "explained" in a way that raised more questions than it answered... Thanks for this video. Appreciated.
This is simply great. That's how teaching should be. Thank you!
This cleared up so many concepts and was extremely informative. Thank you!
This could be the clearest introduction of the topic. Thanks
Amazing explanation and great sound quality. Many thanks.
That was an excellent lesson. Thank you!
Thank you, too!
I really need a good Spanish translation of this content, hopefully a hero will emerge who creates it for this channel
Great video. One consideration: The model at minute 13.15 should be A LOT more famous than it is.
Call this "an economy". It is primed for growth (a good decision will make you earn more than a bad decision will cost you), and indeed it does grow exponentially.
But it's all driven by a minority of "repeatedly lucky" players that bring the average up, while the typical curve goes exponentially to zero!!!!!!
Because even with this growth bias, everyone is one bad decision away from having nothing.
Now, if one adds an if-statement to the model, where any curve below 1 automatically gets +1 added,
nearly all curves would grow exponentially.
Moral of the story....
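The modification proposed above is easy to try numerically. A minimal sketch, assuming ×1.5/×0.6 multipliers and a floor rule as my reading of the comment; whether the floored curves really grow exponentially is left for the reader to explore:

```python
import random

random.seed(1)

# Each round, wealth is multiplied by 1.5 or 0.6 with equal probability.
# With the floor rule, +1 is added whenever wealth drops below 1.

def simulate(rounds=1000, floor=False):
    wealth = 1.0
    for _ in range(rounds):
        wealth *= random.choice([1.5, 0.6])
        if floor and wealth < 1.0:
            wealth += 1.0
    return wealth

without_floor = sorted(simulate() for _ in range(201))
with_floor = sorted(simulate(floor=True) for _ in range(201))

print(without_floor[100])  # median without the floor: typically tiny
print(with_floor[100])     # median with the floor: at least 1 by construction
```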
Great video. I’m very interested in the applications of these theories to insurance problems. Looking forward to seeing more of them.
We have a paper on that and I am sure we will do a video about it. For some reason I can't post the link here but you can search for "insurance makes wealth grow faster".
@@alexadamou7039 thanks Alex. Will search for that.
A lot of actuarial modelling for insurance is predicated on utility theory and on assuming that the data are i.i.d. A lot of these assumptions fall over in reality. I’m looking forward to learning more about ergodicity and how it might impact typical models used in the field. Cheers.
@@stooforthecat Some exponentially decaying autocorrelation function is a better alternative, I suppose, or correlations with a power-law memory kernel. Do we see generalized functions like Mittag-Leffler, Fox H or M-Wright functions in insurance mathematics, or some sort of reduced-form intensity-based models of hazard rates, or point-process modelling frameworks?
Cool. This is precise and clear. Thank you!
Great video
Hands down the best video of 2022 so far! It is so well explained!!
best 3 am find ‼️‼️
This is amazing. I understood this, and appreciate such clear explanation.
This is superb.
You could say it's an above average explanation (pardon the pun).
Underrated content. Beautiful.
Brilliant and easy-to-understand explanation! Thanks for sharing such an excellent video!
really great explanation and unexpectedly informative
so clear thank you so much 🤩🤩🤩
Meilleur video Sur tout l’internet! BRAVO mon vieux!!!!
As somebody who learns by example: you could do literally almost the same video two or three times with different real-world examples to help the concept sink in....
These ideas are so ridiculously rich that I found myself getting distracted at every moment by all the different possibilities. Is statistical mechanics really properly laid out? What types of systems violate underlying ergodicity assumptions and have to be handled with different mathematics? What does this do to quality analysis of frequentist probability studies, if we evaluate whether the cross-sectional sample is truly the same as the longitudinal average (i.e., whether space and time averages agree)?
it's very rich!!
hope to see many more of these videos!.
Amazing video. He had me captivated till the end.
Thanks a lot!
Only one word for the explanation: Wowww.. Great
Incredibly clear and well-illustrated! Well done!
One small remark only: it is a bit distracting that you are not looking at the camera. I realize this is easier said than done, as you have a script, but still :)
Thanks for sharing!
This video is great! How often are you going to be releasing new materials, guys? Looking forward to seeing the next ones!
We've only just started and this is all quite experimental, so I can't give you a precise answer. However, I would guess 1-2 videos a month of this length covering key themes in EE, and maybe some shorter ones on single results.
@@alexadamou7039 This is wonderful. Already looking forward to your next videos! Thanks for making the video.
@@alexadamou7039 Please discuss the Johnson-Nyquist theorem, fluctuation-dissipation relations, Chapman-Kolmogorov-type Markov integrals and their generalizations, master equations, Fokker-Planck equations and their fractional realizations, while we have the pleasure of drinking in all the spoon-fed knowledge on flashy-bangy and whizzy platforms
extremely helpful video, thank you!
Subscribed! Thank you for a great intuition boost on the topic.
Terrific!
Thanks, this was an EXCELLENT video!
just wow, great explanation
How is this connected to Markov chain? Specifically to MCMC algorithms?
Clear as a bell. Thank you 😊 🙏
Amazing!Thanks for sharing your knowledge!
Very interesting video. Tx for posting
What an awesome video very well explained!
Great explanation
absolutely fantastic!!
This is amazing!! Thank you
That last question at the end really perplexed me. Could someone provide with an answer? I can’t decipher which one is ensemble average and which one is time average…
You are doing god's work sir.
Great !! Thank you !
Another up-vote for the high information quality of the comments. After only 40 of them, I have had all my questions answered. Thank you all
HOLY SHIT, I HAVE NEVER BEEN THIS CONFUSED IN MY LIFE, and I can follow the Policy Improvement Theorem. I enjoyed the video though. Going to keep at it until I get it.
Sorry to bother. In the example with the green and red balls, why is it true that the space average equals 1/2? Thanks for your time!
The ball fraction converges to a uniform random variable from 0 to 1 (see e.g. sci-hub.se/10.2307/30211982) which has expectation value 1/2. More simply, we can appeal to symmetry: the ball fraction starts at 1/2 and every trajectory has an equally likely trajectory which is its reflection in the line x=1/2. Therefore, the ensemble average must be 1/2.
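Alex's reply above can be checked with a quick simulation of the urn (a sketch; I assume the urn starts with one green and one red ball, and that each draw returns the ball plus one more of the same colour):

```python
import random

random.seed(2)

# Polya urn: draw a ball, return it plus another of the same colour.
# Each trajectory's green fraction converges to a different random limit,
# yet the ensemble average of the fraction stays at 1/2.

def green_fraction(draws=2000):
    green, red = 1, 1
    for _ in range(draws):
        if random.random() < green / (green + red):
            green += 1
        else:
            red += 1
    return green / (green + red)

fractions = [green_fraction() for _ in range(2000)]
print(sum(fractions) / len(fractions))  # close to 0.5 (ensemble average)
print(min(fractions), max(fractions))   # individual limits spread over (0, 1)
```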
14:18 If the red line is an aggregate of all the other lines, how does it go up while each path is decaying? Can somebody explain? It seems like overall wealth is decreasing, even in comparison with previous steps.
Really beautiful explanation. I would love to explore the code of this, is there a course that would help me with simulations ?
Shouldn't the "Random multiplicative growth" graph (14:20) have at least one of the time-averages be higher than the ensemble-average?
You need an exponentially growing number of trajectories to see one of the atypically large ones that lift the ensemble average.
@@alexadamou7039 I don't understand how, after round 600 for example, all of the trajectories can be negative while the ensemble is positive. If it is the case that what you are showing is just a sample of trajectories which make up the ensemble, my point is that it would be more clear for the viewer if at least one is above the ensemble. It could also be that I am still misunderstanding.
@@timcieplowski2523 I don't think you are misunderstanding. As I say in the video, the ensemble average is in the limit of infinitely many trajectories. After long enough time, a typical *finite* sample of trajectories, such as the sample shown, will all lie below the ensemble average. To see one atypically lucky trajectory near the ensemble average by the end of the simulation, I would need to have used an exponentially large sample. The message I'm trying to convey is that the ensemble average is not at all reflective of typical long-time behaviour in this model.
@@alexadamou7039 I see. I was getting hung-up thinking that the ensemble slope should be an average of the sample trajectories seen. Thanks for taking the time to reply!
@@alexadamou7039 I was confused by the same point. Your explanation here makes the example much clearer - I might suggest pinning it to the top of the comments section if possible.
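The behaviour discussed in this thread is easy to reproduce. A minimal sketch of the multiplicative game (I assume multipliers of 1.5 and 0.6 with equal probability, matching the example discussed in the comments):

```python
import random

random.seed(3)

# Multiply wealth by 1.5 or 0.6 with equal probability each round.
# The exact ensemble average grows like 1.05**t, yet the median of a
# finite sample of trajectories decays towards zero.

def trajectory(rounds=200):
    wealth = 1.0
    for _ in range(rounds):
        wealth *= random.choice([1.5, 0.6])
    return wealth

rounds = 200
sample = sorted(trajectory(rounds) for _ in range(1001))
ensemble_average = 1.05 ** rounds  # exact expectation value

print(ensemble_average)  # about 1.7e4
print(sample[500])       # sample median: far below 1
```

As Alex says, seeing even one trajectory near the ensemble average by the end of the run would require an exponentially large sample.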
Do any kind of ageing effects feature in the theory of ergodicity economics?
Yes. For general wealth dynamics, we can have a situation where some terms dominate at short times and other terms dominate at long times. In effect, the character of the dynamics changes over time, which can cause some observables of the model to exhibit ageing. I can't go into detail in a YouTube comment but we will discuss this in our textbook, which should be released later this year. Please keep an eye on ergodicityeconomics.com/lecture-notes/.
Nice. So, say a system is ergodic - what does that actually tell us about the problem/research question that can be used in a useful way?
Excellent video, especially the section on Examples. I have a question about the example of multiplicative growth. I don't understand how all the trajectories can be decreasing but the ensemble average is increasing. What do you mean by an ensemble average here? Is it not just the average over all ensembles at a single time step?
Also, does non-ergodicity fundamentally mean that there is a dependence on the initial value, or more largely, on the path?
It is because the exponentially decaying trajectories will never go below 0 - they stay between 0 and 1. So even if you have a very small number of trajectories growing exponentially, they will dominate the ensemble average, which is just (1/N) * sum{x_i}.
I was thinking it would have been easier to see if he had included one trajectory that grows exponentially in the figure.
@@elvispiss Ah, if the decreasing trajectories are bounded between 0 and 1, then it makes sense for the average to explode because of an exceptionally high value.
As I understand it, it's the difference between the arithmetic mean (ensemble average) of 1.05 and the geometric mean of sqrt(1.5*0.6) ≈ 0.9487. With each repeated play of this game, wagering all your wealth, the expected return moves from the arithmetic mean to the lower geometric mean. In the end it has a negative expected return, as losing 40% half the time would require you to win 66.67% (instead of 50%) the other half of the time to make up for it.
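The two means mentioned in this thread are quick to verify:

```python
# Arithmetic mean of the multipliers governs the ensemble average;
# the geometric mean governs the growth of a single trajectory.
arithmetic_mean = (1.5 + 0.6) / 2
geometric_mean = (1.5 * 0.6) ** 0.5

print(arithmetic_mean)  # 1.05
print(geometric_mean)   # about 0.9487

# A 40% loss needs a ~66.7% gain to undo, since 0.6 * (1 + 2/3) = 1:
print(1 / 0.6 - 1)      # about 0.6667
```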
Very well explained! Thank you so much!
Sorry for the dumb question: why, in the last example, does it go up 50% with probability one half (so ×1.5) but down only 40% (×0.6), and not down 50% as well (×0.5)?
Great presentation! Thank you! What is the connection between your 'equation of life' and a Markov chain?
It was brilliant thanks a ton!
Thank you very much!
Isn't ergodicity usually defined in terms of visiting all of the space, and then equality of space and time averages is derived from that? Where the main difference would just be whether the space can be divided into areas with orbits that stay within them?
Yes, sometimes ergodicity is defined as you describe. The basic definition I give in section 1 of the talk would be called "mean ergodicity" in some communities. It is the definition we use most often in the EE research programme, because it is simple and adequate for our applications. However, I do discuss exploration of state space - which is the picture commonly used in dynamical systems theory - in section 2 of the talk, starting at 6:15.
As per my understanding, "ergodicity" is a statistical-mechanical concept: you create replicas of the same system, take an ensemble average, and argue that it should equal a time average or trajectory-level average (which is what we observe experimentally) - but there are caveats! In that framework of trajectory-based averages you can essentially equate the response of a system to external perturbations; then the question of stable averages creeps in, which boils down to convergence of probability measures. The interesting thing then becomes looking for alternatives to the basic a priori notions of ergodicity and delving into the realm where non-ergodic behaviour is the most commonly observed phenomenon. As somebody from an EE background commented, part of the problem concerns the sort of linear response embedded in theorems like Wiener-Khintchine, which relates the Fourier transform of the power spectral density to the autocorrelation function of the inherent randomness or noise in noisy circuits. Generalizations now exist that go beyond this linear-response, ergodic Wiener-Khintchine regime and describe non-ergodic behaviour or ageing.
Ergodic control of diffusions is a pretty related and well-researched topic!
Very clear and concise! Thank you ε>.
From my POV, the presentation would be clearer if the examples were section #2, not #3.
Nice explanation - who's the author of the painting in the back?
Perhaps a silly question, but it's not clear to me what things can be defined as ergodic. It's stated repeatedly in the lecture that "processes" can be defined as ergodic or not (e.g., picking/adding green and red balls, measuring the position of gas particles in an ensemble/system, flipping a coin, etc.). But it is never stated whether the system used in observing the process is ergodic or not. Can a system be defined as ergodic or not based on the variable/state-dependent process that is being observed? Furthermore, can a given system/ensemble be ergodic upon observing one process, but not upon observing another?
So if I understand it right: suppose you have an election between 2 candidates, the win probability of one candidate is, say, 55%, and they get 55% of all votes. Is that a special case of an ergodic process?
Processes that create "structure" or "have memory" are the non-ergodic ones, correct? While trajectories that have no structure-creating (or structure-destroying) dynamic tend to be ergodic. I think an important point is that non-ergodic processes must be both "probability-aware" and "structure-aware" in order to avoid an "error catastrophe". The mean-reversion example could be considered "structure-aware" once it is past the early phase of large movements. What type of process is probability-aware during this phase (besides humans)?
You might be able to connect ergodicity and non-ergodicity to ideas about structure and memory, depending on how you define those things mathematically. I give some examples of path-dependence in the final section of the talk. However, my main aim here is to stress that ergodicity is actually a very simple concept: are time and ensemble average equal?
@@alexadamou7039 This was the best explanation I have ever seen. so, Thank you! It appears that any sensitivity to initial conditions is a sure sign of a non-ergodic process, and also unaccounted for by the naive ergodic assumptions. Perhaps in a future video you can show a counter-example! thx!
Fractional Brownian motion with Hurst exponent 0
I think it should be the same system but with different trials? Not different systems?
When are we gettin more videos?:)
Two of my colleagues are filming as we speak!
@@alexadamou7039 Thanks:)
Jesus christ I love you guys.
Thanks very much, I am starting to learn about this important area. May I ask in the final example, what would then be an optimal decision-making strategy when faced with a situation such as in the final example, given that individual repeated trials result in a strong tendency for fortunes to decline over time, despite the ensemble average being positive (profitable)? Are there situations which actually behave as in the final example, but may falsely appear to be ergodic?
Ensemble avg caters to the market setter dealing with multiple traders while different trajectories represent buy-sell series of individual traders. I am not sure about what the optimal strategy might be
As far as I understand it, you're doomed for better or worse on an individual basis; you're trapped in the initial condition, i.e., your first move, and as time goes on it reinforces. I don't think there is an optimal decision strategy for individuals unless you have an infinite time horizon, which would allow you to "see" the whole trajectory.
I just read your insurance paper and it made a lot of sense, but I can't help wondering whether Monte Carlo simulations can 'prove' a lot of these concepts that mathematicians struggle to explain with symbolic maths. You showed this in the example at the end of this video.
Now that we have computers that can do Monte Carlo sims easily and quickly, should we move on from symbolic maths to explain/prove/experiment with these concepts?
I agree that simulation is easy and powerful with modern computers. It's an essential tool, but it's not the whole toolbox. Intuition and analysis are also important.
Quiz answer: a, right?
There's no right answer. At a deep level, it is a question about our uncertainty. If we know what aspects of an individual mean that a treatment will or will not work, then we can identify which people will and will not benefit. This will look like the second scenario. But when we don't know those things, it will look like the first scenario. Agree?
@@alexadamou7039 Agree... I think. At the level of the immune system, it would seem that an individual's dose necessary to prevent the disease would vary over time based on other factors of the immune system's current functioning. On that basis, it would seem that over a long enough time span the immune system would find itself unable to produce enough antibodies, making it 70% effective in 100% of individuals.
@@alexadamou7039 I need to also add that this video and the related material from Ole Peters on ergodicity has really opened my eyes to some malpractice in finance and economics. Thank you.
After further thought the best answer (but least likely to be correct) is neither, rather it means that it is 83.7% effective for 83.7% of the people 😉.
@@prestonsumner7969 The geometric mean is an important quantity in ergodicity economics!
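For what it's worth, the 83.7% in the joke above appears to be the geometric mean sqrt(0.7) - my reading, easily checked:

```python
# If 70% effectiveness is the product of two equal factors, each factor
# is sqrt(0.7), which matches the 83.7% figure.
per_factor = 0.7 ** 0.5
print(per_factor)  # about 0.8367
```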
That's why theoretical physics, like statistical thermodynamics, is considered high science: it grapples with these deep concepts long before other fields catch up with them.
The last example makes no sense. The ensemble average of infinitely many trajectories is 1 × 1.5 × 0.6 = 0.9 - a declining trend, not a growing one!
How I wish I knew more English 😟
Bro i can't, go to dentist.