solving an infinite differential equation

  • Added 21. 08. 2024

Comments • 410

  • @ashtabarbor3346
    @ashtabarbor3346 Před rokem +637

    Props to the editor of these videos for adding the best video descriptions on CZcams

    • @MichaelPennMath
      @MichaelPennMath  Před rokem +221

      Awww thank you very much! that means a lot to me.
      -Stephanie
      MP Editor

    • @danyilpoliakov8445
      @danyilpoliakov8445 Před rokem +16

      Don't you dare like the Editor's reply one more time. It is nice as it is 😅

    • @jonasdaverio9369
      @jonasdaverio9369 Před rokem +1

      ​@@danyilpoliakov8445 It's still holding

    • @jongyon7192p
      @jongyon7192p Před rokem +1

      An infinite differential equation SCP that becomes a bear and eats you

    • @Errenium
      @Errenium Před rokem

      nice pfp

  • @a52productions
    @a52productions Před rokem +256

    Arguably the first method is also sketchy! I was always taught that that recursive method of dealing with infinite sums is dubious unless you can prove convergence another way afterwards. In this case convergence and equality are very easy to show, but that method can fail pretty badly for not-obviously-divergent sums.
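
    A minimal numerical sketch of that convergence check (my own, assuming y = e^(x/2) and numpy; the only input is that the k-th derivative of e^(x/2) is (1/2)^k e^(x/2)):

        import numpy as np

        # Partial sums of y' + y'' + ... for y = exp(x/2); each derivative is
        # (0.5)^k * exp(x/2), so the sum is a geometric series that should equal y.
        x = np.linspace(-2.0, 2.0, 5)
        y = np.exp(x / 2)
        partial = np.zeros_like(y)
        for k in range(1, 40):
            partial += (0.5 ** k) * np.exp(x / 2)
        print(np.max(np.abs(partial - y)))   # ~5e-12: the series really does converge to y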

    • @TaladrisKpop
      @TaladrisKpop Před rokem +32

      Yes, for example, you can get the infamous 1+2+4+8+16+...=-1 or 1-1+1-1+1+...=1/2

    • @thomasdalton1508
      @thomasdalton1508 Před rokem +12

      Yes, if you are going to use that kind of method you really should check the solution actually works. In this case, you'll get 1/2+1/4+1/8+... which does converge and converges to 1, which is exactly what we need.

    • @Owen_loves_Butters
      @Owen_loves_Butters Před rokem +11

      Yep. Hence why you'll find videos online claiming 1+2+3+4+5...=-1/12, or 1+2+4+8+16...=-1 (both are nonsense results because you're trying to assign a value to a series that doesn't have one)

    • @gauthierruberti8065
      @gauthierruberti8065 Před rokem +1

      Thank you for your comment, I was having that same doubt but I didn't remember if the first method was or wasn't allowed

    • @plasmaballin
      @plasmaballin Před rokem

      This is correct. However, the solution obtained in the video can easily be shown to converge, so it is valid.

  • @terpiscoreis9908
    @terpiscoreis9908 Před rokem +53

    Hi, Michael! This is a great problem.
    You can see that the original does have infinitely many solutions (well, let's say candidates for solutions) by making a different choice of where to start the infinite sum on the right hand side.
    For instance, with y = y' + y'' + y''' + y^(4)..., instead move y' and y'' to the left hand side to obtain:
    y - y' - y'' = y''' + y^(4) + ... = D^2(y' + y'' + y''' + ...) = D^2(y) = y''
    Thus the solutions to y - y' - 2y'' = 0 are also solutions to the infinite order differential equation. We recover e^(x/2) as a solution but also obtain a "new" one: e^(-x). However, the infinite sum of derivatives here doesn't converge.
    By an analogous argument, it looks like the solutions to y - y' - y'' - ... - 2y^(n) = 0 for a positive integer n might solve the infinite order differential equation -- assuming the infinite sum of derivatives converges.
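
    A small numerical sketch of this regrouping (mine, assuming numpy; the convergence test is that the geometric series of derivatives of e^(rx) needs |r| < 1):

        import numpy as np

        # Roots of the regrouped equation y - y' - 2y'' = 0, i.e. of 2r^2 + r - 1 = 0.
        # Plugging y = e^(rx) back into the infinite equation needs sum_k r^k to
        # converge, which requires |r| < 1; that keeps r = 1/2 and rejects r = -1.
        for r in np.roots([2, 1, -1]):
            print(r, "series of derivatives converges:", abs(r) < 1)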

  • @trevorkafka7237
    @trevorkafka7237 Před rokem +16

    Answer to the question about the finite version:
    If y=y'+y''+...+y^(n) and we substitute y=e^(kx), we get 1=k+k²+...+k^n, so 1=((1-k^(n+1))/(1-k))-1. This can be rearranged to k^(n+1)-2k+1=0.
    In the limit as n->infinity, we can see that we must restrict |k|≤1. Furthermore, it's obvious k≠0, so 0 < |k| ≤ 1.
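
    A rough numerical illustration of that limit (my own sketch with numpy; the polynomial k^(n+1) - 2k + 1 and the spurious root k = 1 come from the comment above):

        import numpy as np

        # One root stays near 1/2 while the moduli of the remaining roots
        # (after dropping the spurious k = 1) approach 1 as n grows.
        for n in (3, 10, 30):
            coeffs = [1] + [0] * (n - 1) + [-2, 1]      # k^(n+1) - 2k + 1
            roots = np.roots(coeffs)
            roots = roots[np.abs(roots - 1) > 1e-6]     # drop k = 1
            print(n, np.round(np.sort(np.abs(roots)), 3))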

    • @fmaykot
      @fmaykot Před rokem +1

      I'm afraid the limiting procedure in case 2 is a bit more subtle than that. You did not take into account the fact that both r and θ can (and in fact do) depend on n. If θ ~ α/(n+1) as n -> inf, for example, then R ~ 1 as n -> inf and α = 2*pi*m for integers 0

  • @DanielBakerN
    @DanielBakerN Před rokem +73

    The sketchy solution is similar to using the Laplace transform.

    • @nafrost2787
      @nafrost2787 Před rokem +8

      I think using a Laplace transform is a slightly better solution, because it justifies treating the derivative operator as a number in the geometric series formula, because (if I remember things correctly) in the s domain the derivative operator is a number.
      Using the Laplace transform also, if not solve completely, then at least simplify the ODEs given at the end of the video to polynomial equations that can be solved numerically, and it also helps explain why there is only one solution to the ODE of infinite degree, even though in every finite case there are n solutions. This comes from the fact that a power series can have any number of roots, even though the nth partial sum has n roots (it is a polynomial of nth degree); for example exp doesn't have any roots, even complex ones, and of course sin and cos have an infinite number of roots.
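
      A tiny symbolic sketch of that s-domain view (mine, assuming sympy and ignoring initial-condition terms): replacing the derivative operator by s turns y = y' + y'' + ... into 1 = s + s^2 + ... = s/(1-s) for |s| < 1.

          import sympy as sp

          s = sp.symbols('s')
          # Characteristic equation in the "s domain": 1 = s/(1 - s)
          print(sp.solve(sp.Eq(1, s / (1 - s)), s))   # [1/2]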

  • @stratehorthy3351
    @stratehorthy3351 Před rokem +17

    Here's one simplification to the last set of differential equations :
    y+y'+y'' + ... + y^(n) = y^(n+1) + y^(n+2) + .... --- (1)
    Adding y'+y''+ ... + y^(n) to both sides we get :
    y+2(y'+y''+ ... + y^(n)) = y' + y'' + .... --- (2)
    Differentiating (1) then adding y'+...+y^(n+1) to both sides we get :
    2(y'+y''+...+y^(n+1)) = y' + y'' + .... --(3)
    Comparing (2) and (3) we get :
    y=2y^(n+1)
    which matches with the start of the problem. If y=Ce^(ax), we can find that a is the (n+1)'th root of 1/2. I wonder if there are other solutions too !
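
    A small sketch of what those other candidate solutions look like (my own, assuming numpy): the exponents a with y = Ce^(ax) solving y = 2y^(n+1) are the (n+1)-th roots of 1/2.

        import numpy as np

        n = 2                                    # y = 2 y''' as an example
        k = np.arange(n + 1)
        a = 0.5 ** (1 / (n + 1)) * np.exp(2j * np.pi * k / (n + 1))
        print(np.round(a, 4))                    # the three admissible exponents
        print(np.round(a ** (n + 1), 4))         # each one raised to n+1 gives 0.5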

    • @danielrettich3083
      @danielrettich3083 Před rokem +3

      I really liked the "sketchy" method, probably because I'm a physicist xD, and thus tried it on this generalized form of the problem. And it actually leads to the same simplified differential equation you got, namely y=2y^(n+1), which I find absolutely amazing

    • @PleegWat
      @PleegWat Před rokem +1

      @@danielrettich3083 Same here. Remember to include all n+1 (complex) branches of (n+1)√2 to get all solutions.

    • @weeblol4050
      @weeblol4050 Před 6 měsíci

      good job

  • @ManuelFortin
    @ManuelFortin Před rokem +55

    Regarding the missing infinity of solutions, one way of seeing where they go seems to be as follows. Differential equations of the form y = y'+y''+...+y(n) (y(n) is the nth derivative, not to be confused with y evaluated at n) are known to have solutions that are linear combinations of e^(ax), and we need to find the right "a". There are n "a" values. However, only one of them has |a| < 1; the others have |a| > 1. At least this is what it seems from playing with Wolfram Alpha up to n = 20. The problem is that y(n) = (a^n) y. Since |y|>0, if |a|>1, the value of y(n) diverges as n goes to infinity, whatever x is in y(x). Therefore, these solutions are not well-behaved, and we need to set their coefficient to zero in the general solution (linear combination of e^(ax)), otherwise y is not defined. I guess there is a way to prove that only one of the roots has |a| < 1.

    • @whatthehelliswrongwithyou
      @whatthehelliswrongwithyou Před rokem

      but doesn't y diverge if a>0, not a>1? Also, leaving only non-divergent solutions is a great argument in physics, but here they are still solutions; nothing bad about divergence at infinity. At least that's what I think, might be wrong

    • @whatthehelliswrongwithyou
      @whatthehelliswrongwithyou Před rokem +3

      oh, the sum of derivatives doesn't converge at fixed x, then it's a problem

    • @user-sk5zz5cq9y
      @user-sk5zz5cq9y Před rokem

      @@whatthehelliswrongwithyou yes, y diverges as x approaches infinity if a is positive; he was talking about the existence of the solution

    • @ManuelFortin
      @ManuelFortin Před rokem

      @@whatthehelliswrongwithyou Yes, that's what I meant. Sorry for the late reply.

    • @martinkuffer5643
      @martinkuffer5643 Před rokem

      We know the a's are the roots of the characteristic polynomial of the equation. There are n roots (counting multiplicity) of a polynomial of degree n and thus n solutions. In the new equation this still holds, but now you have a "polynomial of infinite degree", i.e. a non-polynomial analytic function. These can have any number of roots (by the procedure you showed, where the roots go to infinity as you add terms to the series), and thus there can be any number of solutions to our original equation :)

  • @thegozer100
    @thegozer100 Před rokem +15

    Some information I found on the question: solving the differential equation for finite amount of terms is the same as solving the equation 1=sum_{j=1}^n a^j, where I used y=exp(a*x) as a trial function. When I plot all the solutions for a large n, the solutions lie on the unit circle in the complex plane, except for one point. The point that is supposed to be at a=1 lies at a=1/2. This would mean that when we take the limit as n goes to infinity all the points on the unit circle would somehow "cancel out" and the point at a=1/2 would remain.

    • @oni8337
      @oni8337 Před rokem +1

      how could i have forgotten about complex number branches

  • @lucyg00se
    @lucyg00se Před rokem +69

    This was such a fun one. You're absolutely killing it man

  • @driksarkar6675
    @driksarkar6675 Před rokem +34

    I think for the general problem at 9:07, you can just apply the first method, so you get y+y’+...+y(n)=y(n+1)+(y+y’+...+y(n))’. When you expand the derivative, everything except y(n+1) and y cancels out, so you get y=2*y(n+1). From there it’s relatively straightforward, and you get y=C*e^(x/(2^(1/(n+1))*e^(2*pi*i*m/(n+1)))) for a real number C and an integer m. That means that you actually have n+1 families in general, so the full solution is a linear combination of these.

    • @rohitashwaKundu91
      @rohitashwaKundu91 Před rokem +2

      Yes, I have done the same thing but isn't the solution coming as y=Ce^(x/(2^(1/n)))?

    • @mathieuaurousseau100
      @mathieuaurousseau100 Před 9 měsíci +1

      @@rohitashwaKundu91 It should be y=Ce^(ax) where a^(n+1)=1 (with C a complex number, I don't know why, they said real) and the number such as a^(n+1)=1 are the 2*m*pi/(n+1) with m integer between 0 and n (included)

  • @kasiphia
    @kasiphia Před rokem +16

    I think a really good idea for a follow-up video would be an explanation of why we don't have infinitely many linearly independent functions that solve the equation. Or perhaps they do exist, and that could be shown. I've noticed that when substituting in these infinitely recursive relationships, we often lose generality. For example, for the function y=x^x^x^x^x... we can do a similar substitution as we did in the video and find that y=x^y, which produces many solutions but only for 1/e

  • @mizarimomochi4378
    @mizarimomochi4378 Před rokem +53

    If you decide to associate the 3rd derivative and onward, you get that the grouped tail is the 2nd derivative of y, which gives y = y' + y'' + y'' = y' + 2y'', and the family of solutions y = c_1e^(x/2) + c_2e^(-x). So we do get infinite families of solutions, but it's a matter of where we associate. If we start grouping at the 4th derivative, we'll get 3 solutions, as we have y in terms of the first, second, and third derivatives. And so on.

    • @patato5555
      @patato5555 Před rokem +6

      You can take this a bit further by noting that the characteristic polynomial from keeping the first n derivatives will factor as (r-1/2)(1+r+r^2+…+r^(n-1)). In general, y=ce^(rx) where r=1/2 or r is a root of 1+r+r^2+…+r^(n-1) for some n. Of course, there could be more solutions than these.

    • @mizarimomochi4378
      @mizarimomochi4378 Před rokem +2

      @@patato5555 I agree. Except they'd be roots of 2x^n + x^(n - 1) + ... + x - 1 if I'm not mistaken.

    • @patato5555
      @patato5555 Před rokem

      @@mizarimomochi4378 if you set the expression equal to 0, divide by 1/2 and then factor out the r-1/2 they will be equivalent.

    • @mizarimomochi4378
      @mizarimomochi4378 Před rokem

      @patato5555 Sorry, I didn't notice the first time. My bad.

    • @patato5555
      @patato5555 Před rokem +1

      @@mizarimomochi4378 No worries!

  • @aceofhearts37
    @aceofhearts37 Před rokem +24

    For the follow-up questions, you can bracket the first one as (y + y') = (y'' + y''') + (y^(4) + y^(5)) + ..., and therefore defining z = y + y' this becomes z = z'' + z^(4) + ..., so the differential equation can be solved in two steps.
    This generalizes to the n case by defining z = y + y' + ... + y^(n) so that the DE can be rewritten as z = z^(n+1) + z^(2n+2) + ..., which by the same method used in the first half can simplify to z = 2z^(n+1). Then you get a sum of exponentials in the complex roots of 1/2 and throw that mess into the RHS of y + y' + ... + y^(n) = z.
    So y(x) will ultimately be a sum of complex exponentials but I imagine the coefficients would get messy fairly quickly.
    Edit: changed n to n+1 in the RHS of the rewritten equation, I had counted that wrong.
    Edit 2: actually not that bad, check replies.
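
    A symbolic spot check of the n = 1 case of this reduction (my own sketch, assuming sympy): with z = y + y', the equation should reduce to z = 2z'', so y = exp(x/sqrt(2)) ought to satisfy y + y' = 2(y'' + y''').

        import sympy as sp

        x = sp.symbols('x')
        y = sp.exp(x / sp.sqrt(2))
        lhs = y + sp.diff(y, x)
        rhs = 2 * (sp.diff(y, x, 2) + sp.diff(y, x, 3))
        print(sp.simplify(lhs - rhs))   # 0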

    • @aceofhearts37
      @aceofhearts37 Před rokem +7

      So, actually not that messy. From now on I'll use Σ to mean the sum from k=0 to k=n.
      The solution to z = 2z^(n+1) is a function of the form z(x) = Σ (A_k)exp[(λ_k)x], where the A_k are any complex numbers and λ_k = [(1/2)^(n+1)] exp(2kπi/(n+1)) is one of the (n+1)st roots of 1/2.
      Therefore, the solution to y + ... + y^(n) = z will have a homogeneous part (a sum of exponentials involving the roots of 1 + λ + ... + λ^n = 0) and a particular solution, which we can assume has the form z(x) = Σ (B_k)exp[(λ_k)x], for some coefficients B_k that we have to compute.
      By comparing with the RHS we get (1+λ_k+...+λ_k^n)B_k = A_k, which by the partial sum of a geometric series and λ_k^(n+1) = 1/2 simplifies to B_k = 2A_k(1-λ_k). Since A_k can be chosen to be any complex number, B_k is also any complex number since 2(1-λ_k) is always nonzero.
      Then if we want real solutions we can pick the B_k to be complex conjugates as needed.

    • @Joe-nh9fy
      @Joe-nh9fy Před rokem +1

      @@aceofhearts37 This is what I worked out as well. Well actually I got y = 2y^(n+1) instead of z. I get this by using the original equation, and a second equation which is the derivative of the first equation. Solve for y^(1) in both equations. Then set those expression equal to each other and solve for y. But I believe your general function is the solution for y

    • @matteopriotto5131
      @matteopriotto5131 Před rokem

      ​@@aceofhearts37 lambda_k should be {(1/2)^[1/(n+1)]}exp(2k(pi)i/(n+1)) I think

    • @aceofhearts37
      @aceofhearts37 Před rokem

      @@matteopriotto5131 You're right, good catch.

    • @matteopriotto5131
      @matteopriotto5131 Před rokem

      @@aceofhearts37 glad I helped

  • @matthewrorabaugh1497
    @matthewrorabaugh1497 Před rokem +9

    For one of the follow-on questions there is a cute result which pops up. f=f'+...+f(n) when n is congruent to 1 mod 4. In that case you can use a sine function because the other derivatives cancel themselves out. I was looking for ways to fit this self-canceling concept into the other finite equations, but I have been unsuccessful.

  • @TaladrisKpop
    @TaladrisKpop Před rokem +32

    As every time when using algebraic manipulations with series (or, more generally, limits), one should carefully check convergence. Without it, the first method only shows that IF a solution exists, then it has to be of the form y=Ce^(x/2)

    • @honourabledoctoredwinmoria3126
      @honourabledoctoredwinmoria3126 Před rokem +2

      It's a fair point, but Y(n) of Ce^ax = (a^n)Ce^ax. So what we actually have here on the RHS is a geometric series (1/2 + 1/4 + 1/8...)Ce^(x/2), and on the left: Ce^(x/2). They equal each other if and only if that geometric series converges to 1, and of course it does. It's a valid solution, and I suspect it is the only valid solution. There are other apparent solutions, but they do not actually converge.

    • @TaladrisKpop
      @TaladrisKpop Před rokem +1

      @@honourabledoctoredwinmoria3126 Yes, convergence is not difficult to check, but it shouldn't be left out

    • @broccoloodle
      @broccoloodle Před rokem

      Well, you first assume a solution exists, you find all solutions, then later on you remove all solutions that do not converge. I find nothing wrong about that logic

    • @TaladrisKpop
      @TaladrisKpop Před rokem

      @Khanh Nguyen Ngoc Did I say the opposite? But where in the video do they eliminate the divergent solutions? If not done, the solution of the problem is incomplete.

    • @broccoloodle
      @broccoloodle Před rokem

      @@TaladrisKpop I think verifying the solutions not diverging is too obvious that Michael chose not to show it on the video. What he wanted to deliver to us is actually the second way and triggering our curiosity on additional problems in the video.

  • @16sumo41
    @16sumo41 Před rokem +13

    Lovely problem! And lovely follow up question ^^. Something really aesthetically pleasing in this problem. Maybe it has to do with the perceived difficulty of solving it, ending in a really nice and simple solution. Lovely.

  • @dmytryk7887
    @dmytryk7887 Před rokem +2

    For the truncated version: y=y'+y''+y'''+...+y(n), let r be a root of x+x^2+x^3+...+x^n=1. Then it is easy to show that y=exp(rx) is a solution to the truncated equation. Since there are n such roots this gives you the basis of the expected n-dimensional solution space: exp(r_1 x), exp(r_2 x), ..., exp(r_n x).
    Now the hand-wavey part : as n approaches infinity, the equation x+x^2+...+x^n=1 approaches x/(1-x)=1 which has the unique solution x=1/2 as found in the video. Not really satisfying. I feel there is a nicer geometric argument, but I don't see it as of now.

    • @alexsokolov1729
      @alexsokolov1729 Před rokem

      You can simplify your characteristic equation using formula for sum of geometric series:
      (x^(n+1) - x) / (x - 1) = 1
      which is the same as
      x^(n+1) - 2*x + 1 = 0, x != 1
      It is easy to show that the function f(x) = x^(n+1) - 2*x + 1 has exactly 2 real roots for odd n and 3 real roots for even n. Excluding x=1 will give us 1 or 2 real solutions depending on parity of n. I guess these observations show that an infinite equation from the video has no more than 2 real solutions. However, there are complex solutions, which should also be considered
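
      A quick numerical look at those real-root counts (my sketch, assuming numpy):

          import numpy as np

          # Real roots of f(x) = x^(n+1) - 2x + 1 for odd and even n.
          for n in (5, 6):
              coeffs = [1] + [0] * (n - 1) + [-2, 1]
              roots = np.roots(coeffs)
              real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
              print(n, np.round(real, 4))   # 2 real roots for n = 5, 3 for n = 6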

  • @BackflipsBen
    @BackflipsBen Před rokem +2

    That perfect infinity symbol at 4:45 touched my soul

  • @DiracComb.7585
    @DiracComb.7585 Před rokem +2

    This honestly doesn’t look awful. The real issue is you need to be careful of the functional analysis of the derivative operator.

  • @Yossus
    @Yossus Před rokem +9

    I love these videos for two reasons: one, the insight on the maths itself, two, the insight on how to cleanly draw the symbols!

  • @davidblauyoutube
    @davidblauyoutube Před rokem +61

    I immediately thought of the "sketchy" solution with D as a linear operator 😆. When the characteristic "polynomial" is actually not a polynomial because it lacks a finite degree, then usually there's some formula that can be applied to its coefficients (otherwise, how would you define it?). In that case, my hunch is that there's some manipulation that can be performed along the lines of techniques used with generating functions and recursive sequences that will produce a diffeq having an order equal to the degree of the formula.

    • @PeterBarnes2
      @PeterBarnes2 Před rokem +5

      I prefer using a slightly more direct approach to using linear operators.
      [1]y = [1/(1-D_x) - 1]y {|y'/y| < 1 (?)}
      (This is equivalent to the given equation, in terms of Differential Operators, with the condition (which might not be necessary) coming from 1/1-s having a pole at s=1. This pole should manifest as divergence in certain exponential solutions, namely those with parameter 's' (from e^sx) outside the radius of convergence of this 'definition of 1/1-s.' I say it 'should' manifest this way, but this theory is not developed enough to be certain of the divergence, at least to my knowledge. Fortunately the final solution satisfies this condition anyway, so it is not repeated.)
      0 = [1/(1-D_x) - 2]y
      (Moving terms between sides of the equation, as both operators are operating on the same term 'y.')
      1/(1-s) - 2 = 0
      (The exponential solutions of any (there is a theorem I've discovered, more or less, to this generalization from polynomials to any function, indeed) Constant-Coefficient Linear DE are found by using the characteristic equation to find the eigenfunctions of the form e^sx, with s the characteristic equation's independent variable.)
      1 - 2(1-s) = 0
      -1+2s = 0, s=1/2
      (Just algebra, here. Having solved for 's,' e^sx are our eigenfunctions, thus:)
      y = Ce^(x/2)
      Really a very short and simple approach. Now, if you want a more difficult approach, you can use the fact that
      [1/(s-D_x)]
      is a variation of the Laplace transform, remembering that
      [e^(bD_x)]f(x) = f(x+b) and
      int{0, inf} e^-at dt = 1/a
      and then you can try to solve the resulting integral equation. It's a good bit of fun, and certainly possible, if a little unnecessary in this problem.
      [Edit: I did this without watching the video first. My mistake, it's almost exactly as presented! Oh well...]

    • @ilonachan
      @ilonachan Před rokem

      What's really great here is that we don't actually need to get all that convoluted to get rid of the sketchiness, and just not do the step with the weird "function division" thing. While we often write the geometric formula as that ratio, its derivation works in any ring if we just skip that final simplification! So with our present ring of linear functors, where addition is adding the results, multiplication is chained application, and division is not generally defined, we can still just skip directly from the (1)y=(sum)y description to the (1-D)y=Dy statement.
      ...although, does D^(n+1) "converge" in some meaningful way? that'd be required for the infinite case, right? the finite case ofc just gives us a relatively simple degree n+1 differential equation, but I forget how exactly those are solved rn...

    • @PeterBarnes2
      @PeterBarnes2 Před rokem +1

      ​@@ilonachan x^n doesn't converge over all x. The domain for D^n to converge over is the space of functions. That's a pretty broad domain, so I prefer to stay within the complex meromorphic functions. (Which, despite including complex functions, is much more restrictive and well-behaved.)
      I'm pretty sure of these two things:
      One of these extended differential operators f(D_x) converges for an exponential function e^sx if and only if the function f(s) converges at 's.' As well, polynomials converge if f(0) converges, and polynomials times exponentials P(x)e^sx converge when e^sx converges. This much I'm fairly confident about.
      Further, other functions than exponentials or polynomials converge for a given differential operator depending on how the function is expressed. For example, a taylor series may diverge on its terms alone, but an exponential times a taylor series may converge absolutely, even when the exponential times the series equals the original series. More than that, integral expressions of some function might converge or diverge if they contain exponential terms that remain inside or go outside, respectively, the domain of convergence of the differential operator. This much is actually given (I think) by the previous thing.
      I have no idea about functions which are in no way expressed as exponentials or polynomials. Not just regarding their convergence under various differential operators, but even how to evaluate them.
      There is something which can, theoretically, help. Functions of the derivative applied to functions of the variable can be reversed:
      [f(D_x)] (g(x)*y(x)) =
      [[g(D_z + s)]{z=D_x} f(z)]{s=x} (y(x))
      It's messy, but cleans up when y=1:
      [f(D_x)] g(x) =
      [g(D_z + x)]{z=0} f(z)
      This allows you to evaluate some expressions more easily. Because it's easy to evaluate exponentials of derivative operators (e^bD is the shift operator by 'b'), and polynomials are basically given (D^p is the pth derivative operator for p a natural number) you can basically evaluate any differential operator on functions expressed in terms of exponentials and polynomials. This works when the exponentials or polynomials are under an integral, or in a sum, or up a tree, anything!
      (By 'up a tree' I'm not actually referring to anything specific. For example, I don't mean towers of exponentials: I am still working on exponentials of polynomials e^(x^p), as they do not behave at all. [e^e^D]y=0 might be the DE for which the gamma function is the solution. Or maybe not, it's hard to tell. Maybe with a minus sign somewhere, but then it doesn't work, it's rather confusing, actually.)
      The fact that exponentials behave better than polynomials motivates me to try and express one in terms of the other. So far I've found one expression which requires a limit, which isn't satisfactory. I've looked at distributions (a generalization of functions), and found a way of getting to it from what are basically derivatives of the sign() function. This, interestingly, gives the exact same result with the limit and everything. I've looked at expressing the logarithm, which also gives the same exact result. Maybe thinking from polylogarithms, or something else entirely? Very uncertain.

    • @sirlight-ljij
      @sirlight-ljij Před rokem

      D is an unbounded operator, so the geometric series requires some assumptions to be made for it to converge

    • @PennyAfNorberg
      @PennyAfNorberg Před rokem

      @@sirlight-ljij I guess that's why the solution was sketchy, and I started thinking about how to check that |D| < 1

  • @Horinius
    @Horinius Před rokem +9

    @10:15
    y + y' ≠ y'' + y'''
    "But I'll let you do it as homework" 😆😆

    • @weeblol4050
      @weeblol4050 Před 6 měsíci

      trivial y + y' = y' + 2y''

    • @Horinius
      @Horinius Před měsícem

      @@weeblol4050
      No, it is not. I don't know how you got the y + y' = y' + 2y''
      My comment actually told viewers that Michael made a mistake at @10:15.
      The correct answer should be
      y + y' = 2 y'' + 2 y'''

    • @weeblol4050
      @weeblol4050 Před měsícem

      @@Horinius y + y' = y''+(y''+ y'''+...)' = y''+(y+ y')' = y'' + y' + y'' = y'+2y'' If you can find a mistake it would be really helpful

    • @weeblol4050
      @weeblol4050 Před měsícem

      @@Horinius but yours also works y + y' = y'' + y''' + (y'' + y'''...)''=2y''+2y''' Lets observe 2x^2 - 1=0 and 2x^3 + 2x^2 - x - 1 = (2x^2-1)(x+1)=0. now lets observe y + y' = y'' + y''' + y^(IV) + (y'' + y''' +...)'''=y'' + 2y''' + 2y^(IV) and 2x^4 + 2x^3 + x^2 - x-1=(2x^3 + 2x^2 - x -1)(x+1) - 2x^3 + x=2(x^2-1/2)(x+1)^2 - 2x^3 + x=0 so there are god knows how many solutions ( oo ). Some of the solutions are y(x)= Ae^(-x) + Be^(x/sqrt(2)) + Ce^(-x/sqrt(2)). So you are correct in some way you found also the solution that I wrote with a constant A I found only 2

    • @weeblol4050
      @weeblol4050 Před měsícem

      @@Horinius Let's also check y = y' + y'' + (y' + y''+...)'' = y' + 2y'', 2x^2 + x - 1 = (2x-1)(x+1), so here e^(-x) is also a solution, so 3:32 is also incomplete. Just for sanity let's check y = y' + y'' + y''' + (y' + y'' + y''' +...)''' = y' + y'' + 2y''' and 2x^3 + x^2 + x - 1 = (2x^2 + x - 1)(x+1) - 2x^2 + x = 0, so this one doesn't work and yields even more solutions; god, I don't want to check anymore, this is cursed. I guess it is to be expected for an infinite-order differential equation to have infinitely many solutions

  • @Tehom1
    @Tehom1 Před rokem +15

    Did Michael escape? Will he be able to cut his way out of the belly of beast with only the Heaviside operator? Stay tuned, viewers! 😮

  • @chimetimepaprika
    @chimetimepaprika Před rokem +7

    Ahh, three seconds in, "The trivial solution works beautifully."

  • @marchenwald4666
    @marchenwald4666 Před rokem +1

    As a general solution to the problem around 9:00 :
    For n terms on the left, the functions satisfying the equation are y = C * e ^ ( ( (1/2) ^ (1/n) ) * x )

  • @anggalol
    @anggalol Před rokem +17

    Well, that is totally unexpected to separate the differential operator💀

  • @jiantaoxiao2481
    @jiantaoxiao2481 Před rokem +8

    Here's an operator-ordering issue. You have to prove D commutes with 1/(1-D) before acting on both the LHS and RHS with (1-D). What you really get is (1-D)y = ((1-D)D(1-D)^(-1))y.

    • @jamiewalker329
      @jamiewalker329 Před rokem +2

      Err, that's trivial, the commutator of any function of an operator with any other function of that same operator is 0. Non trivial commutation relations come from operators being distinct, or distinct components of vector operators.

    • @reeeeeplease1178
      @reeeeeplease1178 Před rokem

      You can "factor" a D out from the series *to the right* and then use the geometric series trick to avoid this problem

    • @jiantaoxiao2481
      @jiantaoxiao2481 Před rokem

      @@jamiewalker329 yes. You are right. [f(D), g(D)]=0

    • @jiantaoxiao2481
      @jiantaoxiao2481 Před rokem

      @@reeeeeplease1178 yes. Thanks.

    • @jiantaoxiao2481
      @jiantaoxiao2481 Před rokem

      f and g have D^n as a basis and the coefficients of the D^n should be constants.

  • @techno2371
    @techno2371 Před rokem +6

    I did it in a less elegant way:
    Since this is a homogeneous differential equation with constant coefficients, you assume the solution is of the form ce^(rx). Differentiating this solution and dividing by ce^(rx) (it can never be 0) you get 1=r+r^2+r^3+... Adding 1 to both sides gives you 2=1+r+r^2+r^3+...=1/(1-r) (for |r| < 1), so 1-r = 1/2 and r = 1/2.

  • @TechnocratiK
    @TechnocratiK Před 9 měsíci

    The 'sketchy' approach is probably made a bit more formal by taking the Laplace transform of both sides. The result is then that Y = (s / (1 - s)) Y, and the solution follows multiplying through by (1 - s) and taking the inverse transform. This also permits us to consider solutions to y + y' + ... y(n) = y(n + 1) + ..., (where y(k) is the kth derivative of y) since we would have:
    (1 - s ^ (n + 1)) / (1 - s) Y = (s ^ (n + 1)) / (1 - s) Y
    Rearranging,
    Y = 2 s ^ (n + 1) Y
    and transforming back:
    y = 2 y(n + 1)
    The resulting basis of n+1 functions is e^(z_k x) for k = 0..n, where the z_k are the n+1 complex (n+1)-th roots of 1/2 (a real basis also exists). The case solved in this video was n = 0.
    There are two assumptions made here. First, that the solution y has a Laplace transform and, second, that the resulting geometric series converges (i.e., |s| < 1). Disregarding the second assumption, we can then ask (for n = 0) whether there exists |s| >= 1 for which s + s ^ 2 + ... = 1.

  • @Blackmuhahah
    @Blackmuhahah Před rokem +1

    Extending the case with finite n to solutions of the form y=e^(a x) you get 1=a+a^2+...+a^n. In the limit as n->\infty you get a=e^(i\phi), where 0

  • @stefanisraelssontampe996

    Consider Q1; let's write the equation again:
    y = y' + y'' + y''' +...
    The first solution translates to, as done in the video to,
    y = 2y'
    So let's do it differently
    y = y' + y'' + D²(y' + y'' + ...) and we get the equation,
    y = y' + 2y''
    the characteristic polynomials are then,
    1 = 2r
    1 = r + 2r²
    1 = r + r² + 2r³
    1 = r + r² + r³ + 2r⁴
    These characteristic equations can be rewritten as,
    (2r-1) = 0
    (2r-1)(r²-1) = 0 , r not equal to 1
    (2r-1)(r³-1) = 0 , r not equal to 1
    ...
    And we see that the only solution that converges when we put it into the defining equation are r=1/2
    But if we generalize the infinite summation definition we can probably conclude a solution like,
    f(x) = Cexp(x/2) + \int exp(x exp(iu)) \mu(du)
    For Q2 similarly you have the characteristic equation,
    (2r^k - 1)(r^m - 1) = 0, r not equal to 1, with k terms at the beginning and m terms to the right before we do the recursive step.
    We will then get the corresponding solution, with r_n a solution of r_n^k = 1, as
    f(x) = C\sum_n exp(x/2^{1/k} exp(i r_n)) + \int exp(x exp(iu)) \mu(du)

    • @stefanisraelssontampe996
      @stefanisraelssontampe996 Před rokem

      1+1+1+1+1+1 is not so nice, so \mu({0}) must be zero, and one could probably discuss how the limit as u -> 0 of \mu must behave; simplest is to assume the existence of an open set U so that 0 \in U and \mu(U) = 0.

  • @blackfalcon594
    @blackfalcon594 Před rokem

    A nice (and seemingly related) parallel:
    The polynomial
    1 = sum_{j=1}^n x^j
    is a degree n polynomial and so has n (possibly complex) solutions. But when we take the infinite sum,
    1 = sum_{j>=1} x^j = x/(1-x) for |x| < 1
    we only get one solution, not infinitely many.

  • @59de44955ebd
    @59de44955ebd Před rokem +2

    On the first follow-up question y + y' = y{2} + y{3} + y{4} + ... :
    Taking the second derivative on both sides we get:
    y{2} + y{3} = y{4} + y{5} + y{6} + ...
    and hence:
    y + y' = 2 * (y{2} + y{3})
    (this factor 2 was missing in the video)
    By substituting z for y + y' we get z'' = 1/2 * z and therefore a solution z = c * e^(x/sqrt(2)).
    A simple real solution that solves the substitution and therefore the original equation is y = c * e^(x/sqrt(2)).
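
    A one-line numerical check of that claim (my own sketch; the only input is that the k-th derivative of exp(x/sqrt(2)) is (1/sqrt(2))^k times the function, so y + y' = y'' + y''' + ... reduces to 1 + 1/sqrt(2) = sum over k >= 2 of (1/sqrt(2))^k):

        import math

        r = 1 / math.sqrt(2)
        print(1 + r, sum(r ** k for k in range(2, 200)))   # both ~1.7071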

    • @59de44955ebd
      @59de44955ebd Před rokem

      Concerning the general equation y + y{1} + ... + y{n} = y{n+1} + ..., if we substitute z for y + y{1} + ... + y{n}, we get z{n+1} = 1/2 * z, and y = c * e^(x/(2^(1/(n+1))) is always a (trivial) solution.

  • @petersievert6830
    @petersievert6830 Před rokem +3

    10:09
    That is most definitely wrong.
    I think, it must be y + y' = y' + 2y''

    • @krisbrandenberger544
      @krisbrandenberger544 Před rokem +1

      No. y+y'=2(y"+y''') from doing something similar with the goal equation.

    • @petersievert6830
      @petersievert6830 Před rokem

      @@krisbrandenberger544
      Well, I am not wrong, I dare say.
      your equation is correct as well though. You cut off beginning after y''' and made the rest into (y+y')'' , while I did after y'' and made the rest into (y+y')'
      Honestly my equation seems much more futile to get to a solution though.

  • @deehobee1982
    @deehobee1982 Před rokem +4

    The differentiation operator is unbounded, so it's dubious to factor it out of an infinite sum like you did. I think what you've done here is solve the corresponding "infinite characteristic equation" for this DE, but that certainly doesn't show that the infinite sum of exponentials converges to e^(x/2).

    • @habibullah-ki7ok
      @habibullah-ki7ok Před rokem +3

      You are absolutely right. The differential operator is not continuous on the space of smooth functions C^(\infty).
      Moreover, you need the norm of D to be less than 1 to guarantee the sum makes sense.
      Nonetheless, this can be saved. Restrict the domain to the set of functions with norm less than one. Certainly the family Ce^{ax} is in this set for |a| < 1.

    • @deehobee1982
      @deehobee1982 Před rokem

      @habib ullah Haha, that sounds right. Thanks. I think another comment gave a procedure to generate a completely different DE using the same logic

  • @kennethvalbjoern
    @kennethvalbjoern Před měsícem

    LOL. The sum(D^n)=D/(1-D) operator expression is so cool. It won't surprise me, if the manipulations you did can make perfect sense in some formal way.

  • @petersamantharadisich6095

    For the finite sum, I get Cexp(ax) as a solution, where a is a solution to the polynomial equation 2a - a^(n+1) - 1 = 0
    I get this by noting
    y'=y''+y'''+...+y[n+1]
    so we have
    y=2y'-y[n+1]
    if you use y=Cexp(ax)
    then you get
    Cexp(ax)=2aCexp(ax)-a^(n+1)Cexp(ax)
    or...
    1=2a-a^(n+1)
    In the limit as n goes to infinity, it requires |a| < 1 so that a^(n+1) -> 0, which gives a = 1/2.

  • @NathanSimonGottemer
    @NathanSimonGottemer Před rokem +2

    How do you know the sum on the right hand side converges? If you are working with a domain of real numbers for y the sum should diverge if x is positive, which makes me feel like this is a sort of Ramanujan-tier cheat code solution. Of course I still think it means something, just not the whole picture…if we take y(0)=0 then the Laplace transform will converge for |s|

  • @JamesLewis2
    @JamesLewis2 Před rokem +1

    When you started the "sketchy solution" I thought that you were going to start grouping from later in the equation, something like noting that y=y′+y″+(terms of the original expansion)″ and then getting the spurious solution family y=ce^−x, which if back-substituted results in basically saying that Grandi's series converges to −1; related to that, if you group it off after the nth derivative, you get an equation with characteristic polynomial 2r^n+r^(n−1)+r^(n−2)+…+r^2+r−1, which factors as (2r−1)(r^(n−1)+r^(n−2)+…+r^2+r+1), and the zeroes are ½ and the roots of unity other than 1, corresponding to spurious solutions equating 1 to the sum of a divergent series with terms that oscillate around the unit circle.
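
    The factorization mentioned here is easy to confirm symbolically; a small sketch (mine, assuming sympy, shown for n = 6):

        import sympy as sp

        r = sp.symbols('r')
        n = 6
        lhs = 2 * r**n + sum(r**k for k in range(1, n)) - 1      # 2r^n + r^(n-1) + ... + r - 1
        rhs = (2 * r - 1) * sum(r**k for k in range(0, n))       # (2r - 1)(r^(n-1) + ... + r + 1)
        print(sp.expand(lhs - rhs))                              # 0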

  • @Qhartb
    @Qhartb Před rokem +5

    The question I thought of as soon as I saw it was: y = y'/1! + y''/2! + y'''/3! + ...
    So a Taylor-series-looking differential equation. Possibly an application of your "what's exp(D)" result from another video?

    • @Kapomafioso
      @Kapomafioso Před rokem +2

      I also thought about that and how the argument shifts when exp(D) is applied. Since exp(D) is the shift-by-one operator, y = (exp(D) - 1)y gives exp(D)y = 2y, so the equation essentially becomes f(x+1) = 2f(x), a functional equation (solved by 2^x times any function of period 1) instead of a differential equation. Infinite series of derivatives can be weird and exotic like that. Sometimes it's not a differential equation at all, despite looking like one.

  • @garyknight8966
    @garyknight8966 Před rokem

    For class II, using the same method as first used, y+y' = y''+D(y+y')=2y''+y'; so y=2y'' with solution y=Cexp(x/\sqrt 2)+Dexp(-x/sqrt2) . The two independent parts arise because we implicitly involve the second derivative. Note the exponent factors 1, -1 are square roots of 1. The next class produces y=2y''' with solution y = Cexp(x/(2^1/3))+Dexp([]x/(2^1/3))+Eexp([]x/2^1/3) with [] the other cube roots of 1: -1/2+-\sqrt3/2 . Three independent parts due to a third derivative. And so forth ...

    • @garyknight8966
      @garyknight8966 Před rokem

      Oops .. the last [] factors I meant to be complex: -1/2+- i\sqrt3 /2 (of course). So these involve trigonometric functions (the even or odd components of exp (i \theta) )

  • @diszno20
    @diszno20 Před rokem +1

    I would love to see what happens when you choose different constants for the different derivatives, e.g.
    y = sum {from k=1 to inf} 1/k y^{(k)}
    Also it would be fun to plug some crazy sequence as constants. I.e. define a_n to be the nth digit of pi and calculate
    y = sum a_n y^(n)
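
    For the first of these variants, a plausible route (my own sketch, not from the comment): trying y = e^(rx) gives the characteristic equation sum over k >= 1 of r^k / k = -ln(1 - r) = 1 for |r| < 1, i.e. r = 1 - 1/e.

        import math

        r = 1 - 1 / math.e
        print(sum(r ** k / k for k in range(1, 400)))   # ~1.0, so r = 1 - 1/e works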

  • @HarmonicEpsilonDelta
    @HarmonicEpsilonDelta Před 10 měsíci

    I find absolutely game changing the fact that applying the geometric series worked 😮😮

  • @3eH09obp2
    @3eH09obp2 Před 7 dny

    I thought there would be a lot of other ones that you get by lumping up the later terms to reduce to a finite order DE.
    y=2y'
    y=y'+2y''
    y=y'+y''+2y'''
    y=y'+y''+y'''+2y''''
    etc
    These are all reached by truncating the series anywhere and seeing that the last term you included equals the sum of the infinitely many remaining terms.
    The indicial equations you get are 2m^n + m^(n-1) + ... + m - 1 = 0, and we study the roots of these equations.
    For the second-order one, we get the added root m=-1 but you see this is silly because then y=e^-x and after factoring out e^-x you get 1 = -1 + 1 - 1 + 1 - ... which is obviously not convergent. Here I realised we need |z| < 1.

  • @DavidSavinainen
    @DavidSavinainen Před rokem +2

    For the case
    y + y' + ... + y(n) = y(n+1) + ...
    you get, by the sketchy solution,
    y + y' + ... + y(n) = (D^[n+1]/(1-D)) y
    (1-D)(y+y'+...+y(n)) = y(n+1)
    Notice that the LHS telescopes, giving only
    y - y(n+1) = y(n+1)
    or in other words,
    y(n+1) = y/2
    which has the solution set
    y = C exp[x/α]
    where α = 2^[1/(n+1)] * exp[2ikπ/(n+1)] for all integers k such that 0 ≤ k ≤ n
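
    The telescoping step can be checked symbolically; a small sketch (mine, assuming sympy, with n = 3 and a generic smooth f): (1 - D)(y + y' + ... + y^(n)) should equal y - y^(n+1).

        import sympy as sp

        x = sp.symbols('x')
        f = sp.Function('f')(x)
        n = 3
        s = f + sum(sp.diff(f, x, k) for k in range(1, n + 1))   # y + y' + ... + y^(n)
        print(sp.simplify((s - sp.diff(s, x)) - (f - sp.diff(f, x, n + 1))))   # 0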

    • @lunstee
      @lunstee Před rokem

      Careful with the telescoping; it only works correctly on the RHS infinite series when abs(D) < 1.

  • @DTDTish
    @DTDTish Před rokem +4

    We can also just plug in y=Ae^(kx) like we do for all constant coefficient linear ODEs, so we have
    y=y' + y'' + ...
    Gives us the characteristic equation
    1=k+k^2+...
    We know that the geometric series is 1+k+k^2+... = 1/(1-k), which is the RHS plus 1. So we have
    1 =1/(1-k) -1
    We get
    k=1/2
    so
    y=Ae^(x/2)
    The video did something very similar, but with operators

  • @jakubszczesnowicz3201

    I love the sketchy proof!!! Operator analysis looks so wild without context though. Like, that whole segment around 5:30 is crazy. If I saw (1 - D)^-1 as a high school student I would be mindblown, my teacher wouldn’t be able to hear the end of it

  • @NikitaGrygoryev
    @NikitaGrygoryev Před rokem

    I have a pretty hand-wavy explanation for the uniqueness of the solution; for something more precise you might need to start thinking harder about what functions we are talking about. So for finite n you would solve the equation by the substitution y=Exp(Ax). The characteristic equation is 1-2A+A^(n+1)=0 (where you should discard A=1). It's easy to see that in the limit as n goes to infinity there is a unique solution with |A| < 1.

  • @omerbar7518
    @omerbar7518 Před rokem

    Integrate both sides, you get:
    | y dx = y + y' + y''....
    | y dx - y = y' + y''+.... = y
    Differentiate:
    y = 2y'
    y/y' = 2
    I smell an exponent
    Find that it's e^0.5x, you're done
    "Fearsom Equation"
    (Also a solution: just plain 0 but this is clear)

  • @anthonypazo1872
    @anthonypazo1872 Před rokem

    "Okay. Nice." 😂😂❤❤ love it every time I hear that.

  • @sebaufiend
    @sebaufiend Před rokem

    The first method, I thought, was neat. I used geometric series but I didn't see a need to go through all that operator business. Something we learned in diffeq is that any linear differential equation system with constant coefficients will have solutions of the form A*exp(mx). And thus making this substitution into the equation we get
    1=m+m^2+m^3....
    The right hand side is very close to a geometric series which has the sum: 1+r+r^2+r^3...=1/(1-r), so if we subtract 1 from both sides we get
    r/(1-r)=r+r^2+r^3...
    So we sub this into our equation we get
    1=m/(1-m)
    The only value that gives us a solution is m=1/2. Thus the solution is y=C*exp(1/2*x)

  • @josephon63
    @josephon63 Před rokem +1

    I don’t understand why :
    - you can say that D(y+y’+…) = y’ + y’’+…
    - on which vector space 1-D is an isomorphism and the series (D^n) converges ?

  • @chrisdupre2862
    @chrisdupre2862 Před rokem

    I don’t know if this has been answered or not already, but one way to look at the non-existence is via the Fourier transform (a favorite for constant coefficient linear ODE). After some manipulation, you can see that the solution must solve \Lambda^{n+1} = 2\Lambda - 1. Now suppose n goes off to infinity. We break up looking for roots into three options: the modulus of lambda is greater than, equal to, or less than one. In the greater-than case, we cannot solve this, as the left hand side is much much bigger than the right. In the equal-to case, the left hand side does not have a limit, so what do we even mean! In the less-than case, the term tends to 0, so 2\Lambda - 1 = 0, which recovers our start. Here's a follow-up: is there a distribution of solutions around the unit circle that this approaches? Is there a meaningful "distribution of other oscillatory solutions at infinity"? Great video! It’s fun to see the resolvent pop up in the sketchy side!

  • @krisbrandenberger544
    @krisbrandenberger544 Před rokem

    Hey, Michael! So for the general case of the follow up question, we would have:
    y+y'+...+y^(n)=2*(y+y'+...+y^(n))^(n+1)

  • @blabberblabbing8935
    @blabberblabbing8935 Před rokem +1

    2nd solution:
    A) Why can we assume D represents a square linear transformation such that its power series makes sense?
    B) How can we justify that the geometric series transformation (for |base| < 1) is valid for such matrices?

    • @DavidSavinainen
      @DavidSavinainen Před rokem

      This is precisely why he called it a sketchy method

  • @haziqthebiohazard3661
    @haziqthebiohazard3661 Před rokem +5

    Off the top of my head my guess was exp(x/2)

  • @olli3686
    @olli3686 Před rokem +5

    10:14 Wait, what happened? He just completely ignored the remainder of D4y to DNy. If y + D1y = D2y + D3y + D4y + … + DNy, then why just entirely drop the 4th derivative etc ????

    • @stewartcopeland4950
      @stewartcopeland4950 Před rokem +5

      it's more like y + y' = 2 * (y'' + y''')

    • @CISMarinho
      @CISMarinho Před rokem

      As @stewart said:
      y’’ + y’’’ + y⁽⁴⁾ + y⁽⁵⁾ +… = (y+y’+y’’ + y’’’ + )’’ = (y+y’ +(y+y’) )’’ = 2(y+y’)’’ = 2(y’’ + y’’’)

  • @theonearney205
    @theonearney205 Před rokem +3

    Elite thumbnail

  • @brianlane723
    @brianlane723 Před rokem +1

    The differential equation essentially becomes 1=1/2+1/4+1/8+1/16...

  • @anasselmoubaraki9410
    @anasselmoubaraki9410 Před rokem

    To answer your question Mr Penn i think that having one solution is a consequence of the analytical property of the solution and having an infinite sum forces the coefficient (a_k) in the analytical expression to be defined uniquely. Thank you for your amazing videos.

  • @kensmusic1134
    @kensmusic1134 Před rokem

    If we put the equal sign after the n-th term of the sequence, we should get the differential equation y=2y^(n+1): y+y'+...+y^(n) = y^(n+1)+y^(n+2)+... = y^(n+1)+(y+y'+...+y^(n))', and cancelling the common terms gives y = 2y^(n+1). This is much easier than the form proposed in the video, and should be solvable.

  • @jevinleno2670
    @jevinleno2670 Před rokem +1

    Hey Michael, for the first method - doesn't the sum law for derivatives only hold for finite sums? This method seems like it needs further justification.

  • @JohnSmith-zq9mo
    @JohnSmith-zq9mo Před rokem

    Note that we have a similar case for ordinary algebraic equations: the equation 1+x+x^2/2+..+x^n/n!=0 has n complex solutions, but if we take the limit we get an equation with no solutions.

  • @seneca983
    @seneca983 Před rokem

    9:20 Is there a nice solution. My answer is "yes". Just try a function of the form C*exp(k*x). You get the equation:
    1+k+k^2+...+k^n=k^(n+1)+k^(n+2)+k^(n+3)...
    Take k^(n+1) as a common factor from the right side.
    1+k+k^2...+k^n=k^(n+1)*(1+k+k^2...)
    Apply the formula for the geometric sum to the left and that of the geometric series to the right.
    (1-k^(n+1))/(1-k)=k^(n+1)/(1-k)
    Cancel out the common denominator and rearrange to get the following equation.
    k^(n+1)=1/2
    The solutions for k are just (1/2)^(1/(n+1)) times the appropriate roots of unity. Technically, I've not proven that there aren't solutions that aren't of exponential form but that seems pretty intuitive.

  • @elephantdinosaur2284
    @elephantdinosaur2284 Před rokem

    Looking at y = y' + ... + y^(n) has n independent solutions of the form y = a*exp(rx) where r is a root of r^n + ... + r = 1. This polynomial equation has the same roots as r^(n+1) - 2r + 1 = 0 excluding r = 1. Most of the roots of this polynomial equation lie outside the unit circle.
    If there was a root inside the complex unit circle with |r| < 1 then |1 - 2r| = |r|^(n+1) ~ 0 which heuristically implies r ~ 1/2. Working in the real numbers similarly shows there's a real root close to 1/2. Thus besides r ~ 1/2 all the other roots have |r| > 1. This of course isn't a rigorous proof, but just shows the intuition behind it.
    So in a non-rigorous way in the limit as n goes to infinity, the r ~ 1/2 solutions to the nth degree DEs converges to y = c exp(x/2) but all the |r| > 1 solutions have to die off otherwise they would introduce divergences.

  • @GeoffryGifari
    @GeoffryGifari Před rokem +2

    On the follow-up question, in the video you shifted the equal sign to the nth sum. Now, can we do this indefinitely, shifting the equal sign to the right to somehow "inverting" the sum of derivatives?
    y + y' + ... + y^(n) = y^(n+1) + y^(n+2) + .....
    to maybe
    lim m -> infinity { y + y' + ... + y^(m-1) = y^(m) ..... } ?

  • @juancristi376
    @juancristi376 Před rokem +4

    Nice video! I think you can apply the first method to also get
    y = y' + y'' + y''' + y'''' ...
    = y' + y'' + D²(y' + y'' + y''' + y'''' ...)
    = y' + 2y''
    For which you can get the solutions
    y = A exp(x/2) + B exp(-x)
    For all A, B real.
    This can be generalized to be equivalent to all differential equations of the form
    y = sum from n =1 to N of D^n y + D^N y
    For any N Natural
    Please correct me if I did any mistake!

    • @knisleyjr
      @knisleyjr Před rokem +1

      So there are in fact infinitely many solutions!!! Good find!!

    • @alexsokolov1729
      @alexsokolov1729 Před rokem +2

      Hmm, I agree with your idea, however, it doesn't seem to work properly. Let's substitute exp(-x) as our solution and divide by it both parts of equation. Then we get
      1 = (-1) + 1 + (-1) + 1 +...
      It is clear that the series in the right part is not convergent, since its partial sums are -1 and 0. However, the substitution of exp(x/2) will give us the correct identity
      1 = 1/2 + 1/4 + 1/8 +...
      The equality above can be easily verified using formula for sum of geometric progression.

    • @MK-13337
      @MK-13337 Před rokem +1

      You get an infinite family of "solutions", but the RHS of the DE will not converge anywhere for any of the other "solutions" and thus they can't really be solutions.

    • @tonybluefor
      @tonybluefor Před rokem

      The solutions y=A exp(x/2)+B exp(-x) satisfy the equation y=y'+2y''=(A/2) exp(x/2) - B exp(-x) + 2((A/4) exp(x/2) + B exp(-x)) = A exp(x/2) + B exp(-x) and are linearly independent. So it seems that there are indeed infinitely many solutions.

    • @tonybluefor
      @tonybluefor Před rokem

      Oh. I've just understood Alex Sokolov's comment. That means the assumptions that y'= y''+y'''+..... or y''=y'''+... can be misleading in infinite series.

  • @Anonymous-zp4hb
    @Anonymous-zp4hb Před rokem +1

    Isn't the general solution just e^(x/2^(1/n))?
    If the n=1 case is the main problem and n=2 is the first follow-up etc..
    In each problem, the right-hand side remains unchanged
    after taking the derivative, then adding the nth derivative of y.
    Do the same to the left and most terms cancel:
    2 (d/dx)^n f_n(x) = f_n(x)
    And so:
    f_n(x) = e^(x/2^(1/n))
    is a solution to the nth case.

  • @patato5555
    @patato5555 Před rokem

    When you move the equals sign around you don’t actually change the problem much. If our cut off is
    y+y’+…+y^(n) = y^(n+1) + …
    Then let g=y+y’+…+y^(n), and rewrite the RHS in terms of derivatives of g.

  • @mathunt1130
    @mathunt1130 Před rokem

    The answer to the question is simple. Look for a trial solution y=exp(mx), and you'll end up with a polynomial equation. Demonstrating that there are a finite number of solutions. You can't do this for an infinite series.
    My first thought was to take Fourier transforms.

  • @epalegmail
    @epalegmail Před rokem +1

    Love that thumbnail, lol

  • @guerom00
    @guerom00 Před rokem +4

    Is there a justification that the geometric series formula "seems to work" with a differential operator ?

    • @DTDTish
      @DTDTish Před rokem

      Not a mathematician, but my guess is that it is linear
      We can also just plug in y=Ae^(kx) like we do for all constant coefficient linear ODEs, so we have
      y=y' + y'' + ...
      Gives us the characteristic equation
      1=k+k^2+...
      And use geometric sum from there.
      This basically does the same thing as the linear operator method, but a bit more simple (adding numbers instead of operators)

    • @guerom00
      @guerom00 Před rokem +1

      ​@@DTDTish yeah... Somehow, i don't have a problem with an object like exp(D) cause this series has an infinite radius of convergence. Here, i try to wrap my head around what a finite radius of convergence for this series means when applied to differential operators :)

  • @srahcir
    @srahcir Před rokem

    Given the geometric operator (partial)sums has a (1-D)^-1 in the denominator, take a look at what happens if you apply (1-D) to both sides of the equations:
    In the question, you get y - y' = y' - y^(n+1), or y = 2y' - y^(n+1). Looking at this as a matrix system of differential equations, you can solve it to get the n linearly independent solutions.
    In the follow-up, you go from y+y'+...+y^(k) = y^(k+1) +... to y-y^(k+1) = y^(k+1). But this is just y=2y^(k+1), which can also be solved as a system of equations C_0 e^(r_0 x) + ...+ C_k e^(r_k x).
    Afterwards you would still need to show that these constructed solutions actually solve the original system.

  • @ntuneric
    @ntuneric Před rokem

    i think some insight for the question at 7:23 is that the differential equation with finite number of terms n corresponds to a characteristic polynomial of degree n that has n roots, whereas the infinite one's polynomial is a power series which has a single root

  • @sleepycritical6950
    @sleepycritical6950 Před rokem

    Jesus christ you came at the right time i love yooooooouuuuuu i needed this desperately

  • @donmoore7785
    @donmoore7785 Před rokem

    Very thought provoking. I honestly found the "sketchy" solution very sketchy - I didn't understand the manipulations of the D operator.

  • @typha
    @typha Před rokem +2

    notice y = y'+ y'' + (y'+y''+y'''+...)'' = y'+2y''
    This gives you additional extraneous solutions that look like e^-x since that doesn't actually converge.
    Similarly we can find that y = y'+y''+2y''', and get a few more 'solutions' but they don't converge either actually.
    Maybe there are ones that do converge, maybe there aren't, I could do some more work and see but I'm in a bit of a hurry right now so someone else will have to :P

  • @8_by_8_battleground
    @8_by_8_battleground Před rokem

    Hi, Michael. For the general differential equation, I am getting two solutions. Either y can be ce^x or it can be a polynomial of degree (n+1) with the coefficient of the highest power being 0.5/(n+1)!.

  • @gnomeba12
    @gnomeba12 Před rokem +6

    I think the answer to the n linearly independent solutions question can be resolved in terms of the characteristic polynomial. For the finite polynomial of degree N, 1-x^1-x^2..., we are guaranteed N complex roots. The infinite "polynomial" converges to 1- x/(1-x), which has exactly one complex root.
    I am curious what happens to the other roots as N increases.

    • @markvp71
      @markvp71 Před rokem +2

      I think you meant one real root, namely 1/2. I suspect that the roots of 1+x+...+x^n, for increasing n tend to infinity except for one root that converges to 1/2. Solving 1=x+...+x^10 with Wolfram Alpha shows it has a root close to 1/2, but the other roots are not very big, maybe they tend to infinity quite slowly.

    • @gnomeba12
      @gnomeba12 Před rokem +1

      @@markvp71 No I did mean complex root. But, of course, its also real.

  • @Calcprof
    @Calcprof Před 9 měsíci

    I love the operational method. Heaviside would approve.

  • @byronwatkins2565
    @byronwatkins2565 Před rokem

    C can also be complex.
    Since all of the terms are positive (except y), the vast majority of the characteristic equation roots are complex and the solutions oscillate. The infinite case has an infinite series as its characteristic equation and all of the coefficients (except a_0=-1) are +1. This infinite set of complex roots may well provide a corresponding infinite set of linearly independent solutions, but I suspect that very few will be useful.

  • @chaosredefined3834
    @chaosredefined3834 Před 9 měsíci

    A comparable situation to the missing infinite of solutions is to consider e^x. We know that e^x = 1 + x + x^2 / 2 + x^3 / 6 + ... And if you truncate that at after the first n terms, we get n-1 zeros. But, if you don't truncate it, there are no zeros. Where did all the zeros go?
    And why did you ask that about the differential equation version, but not the taylor expansion version?
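
    One way to see where those zeros go (my own numerical sketch, assuming numpy): the zeros of the truncated series 1 + x + x^2/2! + ... + x^n/n! have moduli that grow roughly linearly with n, so they escape to infinity.

        import numpy as np
        from math import factorial

        for n in (4, 8, 16):
            coeffs = [1 / factorial(k) for k in range(n, -1, -1)]   # highest degree first
            zeros = np.roots(coeffs)
            print(n, round(float(min(abs(zeros))), 2), round(float(max(abs(zeros))), 2))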

  • @MasterofNoobs69
    @MasterofNoobs69 Před 2 měsíci

    Here is another follow up question: what happens if instead of trying to add all of the derivatives, you tried to multiply them together? An obvious trivial solution is y=0, but is there any other solution?

  • @theupson
    @theupson Před 3 dny

    this example is sort of unsatisfying in that the most obvious basic approach [namely, for linear d.e. with constant coeffs, you use y=exp(rx)] works easily and naturally.

  • @mathieudeschenes7188
    @mathieudeschenes7188 Před rokem

    I’m quite late on this but for the general differential equation at the end, you can just write y^(n+1) + y^(n+2) + … = y^(n+1) + d/dx (y + y’ + … + y^(n)) and then lots of terms cancel out and you end up with y = 2y^(n+1), which gives y = c e^(x / 2^(1/(n+1))), which I think is the only solution

    • @uniquespellingfayl
      @uniquespellingfayl Před rokem

      I guess that 2^(1/(n+1)) can be replaced with any of the n+1 complex (n+1)-th roots of 2: 2^(1/(n+1))*e^(2pi*i*k/(n+1)) with integer k, giving you n+1 solutions

  • @IBH94
    @IBH94 Před rokem

    Well I got the initial solution with realizing that an exponential function will result a sum of a geometric sequence converging to 1 that’s how I got the 1/2… for there I realized that any parameter smaller than 1 would make a converging geometric sum and you can just subtract whatever the sequence converges to and add 1 to balance the equation (the constant will disappear after the first derivative

  • @kirllosatef1522
    @kirllosatef1522 Před rokem

    Didn't take differential equations in my life but tried to solve it and got that y=e^(x/2)
    Here is how I got it:
    Diff both sides we get: y'=y"+y"'+.... (1)
    From the equation: y-y'=y"+y"'+.... (2)
    From (1) and (2) we get that y-y'=y'
    Therefore y=2y' or y'=1/2y
    From that I tried and got the answer.

  • @koenth2359
    @koenth2359 Před rokem

    There's really no need for the fancy D/(1-D) stuff, you can just say:
    y = y'+y''+... (I)
    Differentiating (I) gives
    y' = y''+y'''+...(II).
    Substituting (II) into (I) now gives
    y = 2(y''+y'''+...) = 2y'

  • @danrakes2667
    @danrakes2667 Před rokem

    In response to your question at around 8:10, in the infinite series of y' example you DO get an infinite result. The infinite series is trapped inside e!

  • @IntegralKing
    @IntegralKing Před rokem

    Oh, I've got one! What about y = y'' + y''' + y(5) + ... where the primes are all prime (2, 3, 5, 7, etc)? Will that question wrap back to the Riemann zeta function?

  • @l.h.308
    @l.h.308 Před rokem

    Is it not simpler to take derivative of the equation (y' = y" + y''' + ...), then subtract these two equations to get y - y' = y', most terms vanishing, thus y' = (1/2) y, y = C e^(x/2)? Convergence remains to prove but should be quite simple.
    Does this show that there can't be another family of solutions? (The trivial case of y = 0 (constant) is covered by C = 0)

  • @rainerzufall42
    @rainerzufall42 Před 10 měsíci +1

    Followup (I) is wrong! (y + y')'' is BS. But y + y' = y'' + (y + y')' = y' + 2y'', so y = 2y'' and y'' = (1/2) y. Suggestion: y(x) = A exp(x / sqrt(2)) + B exp(-x / sqrt(2)).

    • @rainerzufall42
      @rainerzufall42 Před 10 měsíci +1

      The solution of @driksarkar6675 is very similar to mine, check his solution (linear combination of (n+1) terms for the (n+1)th root)...

  • @victorrielly4588
    @victorrielly4588 Před rokem +1

    I believe it can be proven that the only solution is the one presented in the video. This is because any solution to y= y’ + y’’+… must also, by simple algebra satisfy y = 2y’ as was shown in the video, and the only solution that satisfies y = 2y’ is the presented solution. Indeed, suppose some other function solved the problem, call that function p(x), then p(x) = p’(x) + p’’(x) + …, but this means p(x) = p’(x) + (p’(x) + p’’(x) + …)’ so p(x) = 2p’(x) thus p satisfies y = 2y’

  • @deathguitarist12
    @deathguitarist12 Před rokem

    I haven't worked this out, but I see a common element between the infinite differential problem and the finite problem. The solution to the infinite differential equation can in fact be written as a linear combination of functions y_i, if you were to expand the exponential Cexp(x/2) as a Taylor series. My suspicion is that the solution to the finite differential version of this would just be the n-term Taylor expansion of the exponential solution. But I can't be sure without working it out.

  • @tylerduncan5908
    @tylerduncan5908 Před rokem

    Intuition tells us that a less restricted input will lead to a greater degree of freedom in the output, and considering that "n" parameters must be satisfied, the solution space collapses to a line as n --> ∞

  • @georgewu5885
    @georgewu5885 Před rokem

    So the other challenging question, if we let z=y+y'+y"+...+y^(n)..., then we have z=z'+z"+..., and thus z=c*e^(x/2), and the original equation for the sum of y, its derivative all the way to the nth derivative becomes solving this nth derivative equation: y+y'+y"+...+y^(n)=C*e^(x/2).

  • @Minecraft2331
    @Minecraft2331 Před rokem

    I'm pretty sure you would just apply an infinite reduction of order, and basically end up with the sum from n=0 to infinity of (c_n*x^n*e^(x/2)) basically creating a general solution for the infinite roots. I think some sort of induction proof that this format of (c_n*x^n*e^(x/2)) works for any n in the naturals combined with a sequence proof to show this works as n->infinity would be necessary to prove this is the case, but that's my initial intuition of how to show there are in general infinite general solutions.

  • @BenfanichAbderrahmane
    @BenfanichAbderrahmane Před rokem +3

    The operator d/dx is not continuous so you can't do that I think ?

  • @tobysomething3742
    @tobysomething3742 Před rokem

    I think the reason you don't have more solutions in the infinite case, is that the solutions are of the form c*e^(ax) where (sum from i=1 to n of a^i)=1, and apart from a around 0.5, the solutions for a approach the unit circle, in the limit once they "reach" the unit circle their powers can't sum to one, they must cancel to be 0, so the solutions apart from a=0.5 in the finite case don't have a corresponding infinite case solution

  • @Achill101
    @Achill101 Před rokem

    I see a solution: y(x) = exp(x/2)
    Then on the right side is a sum of the exp-functions with the coefficients 1/2, 1/4, 1/8, 1/16 ... which adds up to the coefficient 1 on the left side. Now I'll see if Michael has a class of solutions.

    • @Achill101
      @Achill101 Před rokem

      Interesting question at 7:42