Comments •

  • @Nikolas_Davis
    @Nikolas_Davis Před 2 lety +122

    Me, as a physicist, watching Michael "play fast and loose" with notation:
    -Good, good... embrace the dark side...

  • @omeragam8628
    @omeragam8628 Před 2 lety +73

    Wow, I never thought about plugging an operator into a generating function. That's crazy!

    • @kristianwichmann9996
      @kristianwichmann9996 Před 2 lety +18

      Physicists everywhere: Of course you can! :D

    • @hybmnzz2658
      @hybmnzz2658 Před 2 lety +2

      Linear algebra moment

    • @Debg91
      @Debg91 Před 2 lety +7

      @@kristianwichmann9996 As a theoretical physicist with an interest in pure mathematics, my thought is: "I see no reason why this shouldn't work, but nevertheless I would like to see a rigorous explanation."

    • @JJ-tp2lc
      @JJ-tp2lc Před rokem

      If you think that is crazy, just watch Supware's video on operational calculus

  • @jkid1134
    @jkid1134 Před 2 lety +27

    Very excited to plunge into this. In high school, I did the first 10 power sums using matrix reductions, all on one sheet of graph paper with a LOT of erasing, and all on some shaky assumption of existence. So let's see what happens when instead of pencil and paper, we take a Penn to it!

  • @leif_p
    @leif_p Před 2 lety +40

    3:49 There is a small mistake: the first sum should have been reindexed to start at m=0 to apply the Cauchy product formula. That changes the r to an r+1 in the formula at 6:10.
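
    For reference, this is the Cauchy product identity the comment relies on (stated for formal power series; padding the series for e^x - 1 with a zero m=0 term so it starts at m=0 is what produces the r+1 shift):

    ```latex
    \left(\sum_{m=0}^{\infty} a_m x^m\right)\left(\sum_{j=0}^{\infty} b_j x^j\right)
      = \sum_{r=0}^{\infty}\left(\sum_{m=0}^{r} a_m\, b_{r-m}\right) x^r
    ```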

    • @josealbertomelendezpina3084
      @josealbertomelendezpina3084 Před 2 lety +8

      Even more: the final answer should start at k=1, otherwise you end up dividing by 0!

    • @krisbrandenberger544
      @krisbrandenberger544 Před 2 lety +1

      Yes, and that means the inner sum will be 0 if r is greater than or equal to 1, not just greater than 1.

    • @MohamedBenamer940
      @MohamedBenamer940 Před 5 měsíci

      The series for e^x - 1 should start at m = 1,
      because e^x = 1 + x + x²/2 + x³/6 + ...,
      so e^x - 1 = x + x²/2 + x³/6 + ...,
      which explains m = 1.

  • @mathmathician8250
    @mathmathician8250 Před 2 lety +16

    I first learned about Bernoulli numbers from a Mathologer video. Thanks for the derivation!!!

    • @eetulehtonen69
      @eetulehtonen69 Před 2 lety +2

      Bernoulli numbers, as introduced by Bernoulli himself, were originally defined as a factor of the coefficients in the expanded sum of the kth powers of the first n integers. One can quite easily derive the recursion formula from this using algebraic manipulation of the sum and a bit of calculus. Furthermore, one can derive an explicit sum for the kth Bernoulli number from this very recursion. That sum can then be used to verify the generating function (first introduced by Euler). As such, the actual derivation of these numbers is more complicated than what's shown. The Bernoulli numbers are an amazingly versatile object: they appear in the values of the Riemann zeta function (and other L-functions), in combinatorics in the formulas for Stirling numbers of both kinds (and for many other special numbers), in differential topology as part of the Kervaire-Milnor formula, and much more. Truly an all-purpose object of outstanding power.
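
      A minimal sketch of the recursion mentioned here, assuming the convention B_1 = -1/2 that comes from the video's generating function x/(e^x - 1); the function name and layout are just illustrative:

      ```python
      from fractions import Fraction
      from math import comb

      def bernoulli_numbers(n_max):
          # From x = (e^x - 1) * sum_m B_m x^m / m!, comparing coefficients of x^(n+1)
          # gives sum_{m=0}^{n} C(n+1, m) * B_m = 0 for n >= 1, with B_0 = 1.
          B = [Fraction(1)]
          for n in range(1, n_max + 1):
              s = sum(comb(n + 1, m) * B[m] for m in range(n))
              B.append(-s / (n + 1))
          return B

      print([str(b) for b in bernoulli_numbers(8)])
      # ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
      ```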

  • @goodplacetostop2973
    @goodplacetostop2973 Před 2 lety +33

    16:42 That makes me think… is there a closed form for 1 + 4 + 27 + … + n^n ?

    • @Noam_.Menashe
      @Noam_.Menashe Před 2 lety +5

      I don't think so. I've spent a good half an hour looking/getting sidetracked.
      You can always approximate it.

    • @billbill1235
      @billbill1235 Před 2 lety +10

      I have been thinking about this for about a year. Not only have I not managed to calculate it myself, but I haven't even found anything on the internet about it. It is a very difficult and interesting problem. I asked my A-level teacher, who has a math degree from Harvard, but he had no idea either. It most likely hasn't been discovered yet...

    • @LinigDrake
      @LinigDrake Před 2 lety +6

      @@angelmendez-rivera351 Can you link it, please?

    • @JM-us3fr
      @JM-us3fr Před 2 lety +3

      I would just approximate it with the Euler-Maclaurin formula. It allows you to estimate the sum with an integral plus some correction terms.

    • @LinigDrake
      @LinigDrake Před 2 lety +4

      @@angelmendez-rivera351 What I meant to say was: Would you be so kind as to link some research proving the absence of such a closed formula?

  • @Cicote
    @Cicote Před 2 lety +16

    The exponential operator exp(y*d/dx) at 8:50 strongly resembles the translation operator T in quantum mechanics.
    In quantum mechanics, the translation operator shifts the wave function ψ by a set distance x: ψ(r - x) = T(x)ψ(r). It is defined as T(x) = exp(-i*x*p/ħ), where x is the shift distance, p = -i*ħ*d/dr is the momentum operator, and ħ = h/2π is the reduced Planck constant. That is, T(x) = exp(-x*d/dr), which is exactly the operator in the video (up to a minus sign), and it likewise shifts the argument of the function f.

    • @APaleDot
      @APaleDot Před 2 lety +1

      This connection can be understood by defining the shift operator E, which shifts a function over by one unit: Ef(z) = f(z+1).
      Obviously this operator can be applied multiple times and it behaves as you'd expect, E²f(z) = f(z + 2), and inverses and fractional powers also make sense, so in general (E^y)f(z) = f(z + y). The connection to translations in space should also be obvious.
      The Taylor series of a function f centered at a is ∑ (1/n!) (d^n/dx^n)f(a) (x - a)^n for n between 0 and ∞. So the Taylor series centered at z for the shifted function takes a = z and x = z+1, which gives f(z+1) = ∑ (1/n!) (d^n/dz^n) f(z) ((z+1) - z)^n. That final factor equals 1, so we are left with ∑ (1/n!) (d^n/dz^n) f(z).
      If you know your Taylor series, you can see that ∑ (1/n!) (d^n/dz^n) is simply e^x with x = d/dz, which means that Ef(z) = (e^(d/dz)) f(z), and we get the general translation operator via E^y = (e^(d/dz))^y = e^(y * d/dz).
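
      A quick sanity check of that identity (a sketch using sympy; for a polynomial the operator series terminates, so there are no convergence issues):

      ```python
      import sympy as sp

      z, y = sp.symbols('z y')
      f = z**3 - 2*z + 5  # any polynomial: only finitely many derivatives are nonzero

      # e^(y d/dz) f = sum_{n>=0} y^n/n! * d^n f/dz^n; the series stops after n = 3 here.
      shifted = sum(y**n / sp.factorial(n) * sp.diff(f, z, n) for n in range(4))

      print(sp.expand(shifted - f.subs(z, z + y)))  # 0, i.e. e^(y d/dz) f(z) = f(z + y)
      ```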

    • @user-en5vj6vr2u
      @user-en5vj6vr2u Před 2 lety

      Damn I hate when people start talking like textbooks in the comment section

    • @APaleDot
      @APaleDot Před 2 lety +1

      @@user-en5vj6vr2u
      Bruh, this is a math video. Expect math in the comment section.

    • @user-en5vj6vr2u
      @user-en5vj6vr2u Před 2 lety +1

      @@APaleDot Please leave out the passive voice and the “obviously” and the “you can clearly see”

    • @APaleDot
      @APaleDot Před 2 lety

      @@user-en5vj6vr2u
      why?
      And what does that have to do with textbooks?

  • @bosorot
    @bosorot Před 2 lety +13

    If you find the topic too difficult, my suggestion is to watch the Power Sum MASTER CLASS video by Mathologer. These videos complement each other.

  • @stefanomarchesani7684
    @stefanomarchesani7684 Před rokem +2

    I think the final formula has two big problems: the term in the summation does not make sense if k=0 or if k > p+1. The k > p+1 problem is clearly due to the fact that the function x^(p+1) has only its first p+1 derivatives nonvanishing, while the k=0 term does not work because of the simplifications made in writing the term p choose k-1 (which are not valid if k=0).

  • @PowerhouseCell
    @PowerhouseCell Před 2 lety +1

    Beautifully done! 💛

  • @Noam_.Menashe
    @Noam_.Menashe Před 2 lety +10

    12:38 this reminds me of the Euler-Maclaurin summation formula.
    It's possible to derive this sum of powers/Faulhaber's formula from it (Pretty trivially).
    You could make a video on it.
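
    A rough numeric sketch of that connection (assuming the usual Euler-Maclaurin truncation with just the B_2 = 1/6 and B_4 = -1/30 correction terms hard-coded; names here are illustrative):

    ```python
    from math import factorial

    def d_xp(x, p, d):
        # d-th derivative of x**p: p*(p-1)*...*(p-d+1) * x**(p-d), or 0 if d > p
        if d > p:
            return 0
        c = 1
        for i in range(d):
            c *= p - i
        return c * x ** (p - d)

    def em_power_sum(n, p):
        # integral from 1 to n + endpoint average + Bernoulli corrections B_2, B_4
        total = (n ** (p + 1) - 1) / (p + 1) + (1 + n ** p) / 2
        for B, j in [(1 / 6, 1), (-1 / 30, 2)]:
            total += B / factorial(2 * j) * (d_xp(n, p, 2 * j - 1) - d_xp(1, p, 2 * j - 1))
        return total

    n, p = 100, 5
    print(sum(k ** p for k in range(1, n + 1)), em_power_sum(n, p))
    # the two agree up to floating-point rounding (for p = 5 these few terms are already exact)
    ```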

    • @ZedaZ80
      @ZedaZ80 Před 2 lety +2

      ♥️ I love Euler-Maclaurin
      I actually started from the 1^p+2^p+...+n^p in highschool (technically middle school, I still have the original notebook) and ended up deriving the Euler-Maclaurin formula and had a notebook with a page of constants. College came and one day I saw some very familiar fractions and turned out to be "Bernoulli numbers" and then after that I found out the name for the formula I came up with was the Euler-Maclaurin formula XD I had a LOT of fun with that and still do.

  • @richardheiville937
    @richardheiville937 Před 2 lety +3

    It's easy to derive such formulas: observe that f(n) = 1^p + 2^p + ... + n^p is a polynomial in n of degree p+1 with rational coefficients, so one only has to determine the coefficients, and to do that one solves a system of linear equations.
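
    A sketch of that approach (illustrative names; the key fact is that the power sum is a degree-(p+1) polynomial in n with zero constant term, so p+1 sample values pin down its coefficients):

    ```python
    import numpy as np

    def power_sum_coeffs(p):
        # coefficients c_1..c_{p+1} with 1^p + ... + n^p = c_1*n + ... + c_{p+1}*n^(p+1)
        ns = range(1, p + 2)
        A = np.array([[n ** j for j in range(1, p + 2)] for n in ns], dtype=float)
        b = np.array([sum(k ** p for k in range(1, n + 1)) for n in ns], dtype=float)
        return np.linalg.solve(A, b)

    print(power_sum_coeffs(2))  # ~[0.1667, 0.5, 0.3333] -> n/6 + n^2/2 + n^3/3
    ```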

  • @ostorjulien2562
    @ostorjulien2562 Před 2 lety +6

    The sum should not go to infinity; it should stop at p+1, since the kth derivative of x^(p+1) is zero for k bigger than p+1.

    • @deinauge7894
      @deinauge7894 Před 2 lety +2

      And p choose k-1 is also 0 for k > p+1, so that's not a problem. What is problematic is the k=0 term ;-)

  • @romajimamulo
    @romajimamulo Před 2 lety +3

    6:07 The r at the top of all of these sums should be r-1, as there's no m=0 term.

  • @SirCalculator
    @SirCalculator Před rokem +1

    Take the first 3 rows of Pascal's triangle and erase the rightmost 1's:
    1        ->
    1 1      ->  1
    1 2 1    ->  1 2
    Insert these rows into a matrix as follows, placing minus signs on every 2nd diagonal:
    (  1  0 )
    ( -1  2 )
    Invert this matrix to get
    (  1    0  )
    ( 1/2  1/2 )
    Now you simultaneously get the first power sums from this matrix-vector equation:
    ( 1^0 + ... + n^0 )     (  1    0  )   ( n  )
    ( 1^1 + ... + n^1 )  =  ( 1/2  1/2 ) * ( n² )
    This procedure is easily generalized to get all power sums up to any exponent p. The computational difficulty only arises in the task of inverting bigger and bigger matrices, but the obvious benefit is that you get all the formulas at once.
    Props to Mathologer and his video on power sums, which is where I got this from.
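
    A minimal sketch of that construction as I read it (the matrix rows come from telescoping n^m = sum_k (k^m - (k-1)^m); names are illustrative):

    ```python
    import numpy as np
    from math import comb

    def pascal_matrix(P):
        # M[m-1][j] = (-1)^(m-1-j) * C(m, j): row m expresses n^m in terms of
        # the power sums S_j(n) = 1^j + ... + n^j for j = 0..m-1.
        M = np.zeros((P + 1, P + 1))
        for m in range(1, P + 2):
            for j in range(m):
                M[m - 1, j] = (-1) ** (m - 1 - j) * comb(m, j)
        return M

    Minv = np.linalg.inv(pascal_matrix(3))
    print(np.round(Minv, 4))
    # row 0: S_0(n) = n; row 1: S_1(n) = n/2 + n^2/2; row 2: S_2(n) = n/6 + n^2/2 + n^3/3; ...
    ```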

  • @dr.matthiaskruger8538
    @dr.matthiaskruger8538 Před 2 lety +2

    In which sense is the result a closed form? You started with a finite sum and ended with a sum...

  • @holyshit922
    @holyshit922 Před 2 lety +4

    I would use "Difference calculus"
    Instead of derifative we have difference operator
    which can be defined as follows
    Δ^{0}f(n) = f(n)
    Δ^{n}f(n) = Δ(Δ^{n-1}f(n+1) - Δ^{n-1}f(n))
    Instead of integral we have sum
    Indefinite sum Σ f(n)
    and definite sum Σ_{k=0}^{n} f(k)
    We have also summation by parts
    (Coincidence of names is accidental - I dont think so)
    Instead of powers we have falling factorials
    --------------------------------------------------------------------------------------------
    In my opinion to calculate b_{1} we need limit of first derivative at zero
    If generating function is rational then geometric series and its derivatives will be enough
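
    A small sketch of the discrete power rule mentioned above, applied to the sum of squares (the decomposition k^2 = k(k-1) + k into falling factorials is the key step; names are illustrative):

    ```python
    def falling(x, m):
        # falling factorial x*(x-1)*...*(x-m+1)
        out = 1
        for i in range(m):
            out *= x - i
        return out

    # Discrete power rule: sum_{k=0}^{n-1} falling(k, m) = falling(n, m+1) / (m+1).
    # With k^2 = falling(k, 2) + falling(k, 1):
    def sum_of_squares(n):
        return falling(n + 1, 3) // 3 + falling(n + 1, 2) // 2

    n = 10
    print(sum_of_squares(n), sum(k ** 2 for k in range(1, n + 1)))  # 385 385
    ```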

  • @aweebthatlovesmath4220
    @aweebthatlovesmath4220 Před 2 lety +1

    Thank you so much Michael!!!
    I really want to know about Bernoulli numbers.

    • @eetulehtonen69
      @eetulehtonen69 Před 2 lety +1

      Bernoulli numbers, as introduced by Bernoulli himself, were originally defined as a factor of the coefficients in the expanded sum of the kth powers of the first n integers. One can quite easily derive the recursion formula from this using algebraic manipulation of the sum and a bit of calculus. Furthermore, one can derive an explicit sum for the kth Bernoulli number from this very recursion. That sum can then be used to verify the generating function (first introduced by Euler). As such, the actual derivation of these numbers is more complicated than what's shown. The Bernoulli numbers are an amazingly versatile object: they appear in the values of the Riemann zeta function (and other L-functions), in combinatorics in the formulas for Stirling numbers of both kinds (and for many other special numbers), in differential topology as part of the Kervaire-Milnor formula, and much more. Truly an all-purpose object of outstanding power.

  • @diniaadil6154
    @diniaadil6154 Před 2 lety +7

    Your formula for the nth Bernoulli number is wrong, since the Taylor expansion on the left (of e^x - 1) starts from m=1. It is correct if the sum stops at r-1.

  • @QuantumHistorian
    @QuantumHistorian Před rokem +1

    I'm not sure that turning a finite sum over the integers into an infinite one over a bunch of fractional constants that have to be defined iteratively counts as solving the problem in any way...

  • @CTJ2619
    @CTJ2619 Před rokem +1

    LIKED “A little detour into calculus”

  • @abrahammekonnen
    @abrahammekonnen Před 2 lety +1

    14:11 When you cancel the p+1's out, you limit k to the natural numbers instead of the natural numbers AND 0 (which is a loss of information, but it makes sense to do; I just wanted to make note of it).

  • @abrahammekonnen
    @abrahammekonnen Před 2 lety

    Wow. Yeah I'll be coming back to this video multiple times.

  • @a.osethkin55
    @a.osethkin55 Před rokem

    Thanks!

  • @Bennici
    @Bennici Před 2 lety +1

    I can tell you've talked to someone who teaches college physics by the way that, in one of your recent videos, you say something like "we're playing it fast and loose with the existence [of these objects], but everything will work out in the end".
    And so rigor dies a lonely death, ignored in a corner of an unlit closet. :P

  • @bhavikdesaichemobbc
    @bhavikdesaichemobbc Před 2 lety +1

    Amazing approach, sir

  • @cbbuntz
    @cbbuntz Před rokem +1

    I just solve them with a matrix (I use the formulas quite a bit). The finite difference x^p - (x-1)^p yields binomial coefficients with the highest-degree term removed, so the main diagonal should be 1, 2, 3, 4.... Those polynomials have an antidifference that is x^p, which is also useful. Fill the matrix with those coefficients and take the inverse, and boom, you've got your polynomial coefficients for power sums. With both matrices, you can map between derivatives/integrals and finite differences/antidifferences.
    Another useful property of those truncated binomial coefficients is that they are exactly what comes from fitting a polynomial to a sequence and then evaluating the polynomial at the next term: a very cheap way of extrapolating. You don't need the whole matrix; a single vector does the job (see the sketch below).
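
    A sketch of that extrapolation trick (assuming the sequence is exactly polynomial of degree less than its length, so its m-th finite difference vanishes; names are illustrative):

    ```python
    from math import comb

    def next_term(values):
        # If the m-th finite difference is zero, then
        # f(n) = sum_{k=1}^{m} (-1)^(k+1) * C(m, k) * f(n - k).
        m = len(values)
        return sum((-1) ** (k + 1) * comb(m, k) * values[-k] for k in range(1, m + 1))

    cubes = [k ** 3 for k in range(1, 6)]  # 1, 8, 27, 64, 125
    print(next_term(cubes))                # 216 == 6**3
    ```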

    • @davidblauyoutube
      @davidblauyoutube Před rokem

      Didn't think the umbral calculus would make an appearance here, interesting.

  • @wavyblade6810
    @wavyblade6810 Před 2 lety

    4:00 We can do this because both series converge and at least one converges absolutely; this is what Mertens' theorem says.

  • @dracoinvictus
    @dracoinvictus Před rokem +1

    Me: I totally don't understand how you jumped from one equation to the next equation. Michael Penn: OK. Nice!

  • @chrisbarrington108
    @chrisbarrington108 Před 2 lety +2

    Hi Michael, in your last two lines I’m not sure the telescoping works as the b(k) will be different at each stage… Also, won’t it blow up at k=0… Maybe I’m missing something… Love your channel by the way

  • @renesperb
    @renesperb Před měsícem

    It is not hard to derive a fairly simple recursion for 1 + 2^p + 3^p + ... + n^p, starting with the well-known case p = 1. But of course you don't get a general formula.

  • @Kapomafioso
    @Kapomafioso Před 2 lety +5

    After watching this, I don't really get what you mean by a "closed form" anymore...
    You start from a finite sum and as a result you get... an infinite sum? How is one considered closed form while the other isn't? To me, a closed form is something you plug one number into and get another number out, without having to sum increasingly many (or infinitely many) terms. So not only is this an infinite sum, but for each term you have to find the k-th Bernoulli number, which requires another computation involving a recursion or another sum...
    BTW I don't think that sum should go to infinity; the term k = 0 is also weird, with p choose -1...
    If the final sum is finite, you trade a sum of n terms for a sum of p terms, albeit each one needs another sum of increasingly many terms (to get those Bernoulli numbers)...

    • @EebstertheGreat
      @EebstertheGreat Před 2 lety +2

      The sum is finite because the higher order derivatives of a polynomial are all zero. In the final formula, he is using the convention that (a choose b) = 0 for a < b, so the sum is still technically finite. But yeah, he forgot to reindex the sum, it should start at 1, not 0 (not only is he choosing -1, but also dividing by 0). Personally, I found this exposition very confusing.

  • @daniellosh8341
    @daniellosh8341 Před rokem

    Operating on both sides with sum(B_n D^n / n!) could avoid dividing by (e^D - 1), i.e. the existence (definition) of an inverse of (e^D - 1)

  • @n8cantor
    @n8cantor Před 2 lety +5

    Love the idea that we found a "closed form" for a finite series of powers and it was an infinite series of powers (and Bernoulli numbers and binomial coefficients)
    For another derivation and application of the Bernoulli numbers, I highly recommend this video on the Basel Problem: czcams.com/video/nxJI4Uk4i00/video.html

    • @srijanraghunath4642
      @srijanraghunath4642 Před 2 lety +2

      Except it is a finite sum, since the binomial coefficient in the sum contains a (p-(k-1))! in the denominator! The coefficient goes to 0 when p-(k-1) = -1, -2, -3, …, since 1/(-1)! = 1/(-2)! = 1/(-3)! = … = 0.
      And as for the link you posted, I think the alternate derivation of the Euler-Maclaurin formula is much better mathematically than the rough operator arguments.

  • @johnchessant3012
    @johnchessant3012 Před 2 lety +2

    The formula at 12:32 seems related to the Euler-Maclaurin formula.

    • @yavuz1779
      @yavuz1779 Před 2 lety

      Watch Mathologer's video on this subject

  • @FrankHarwald
    @FrankHarwald Před rokem

    For reference: the formula that gives a closed solution to 1^p + 2^p + 3^p + ... + n^p is also known as "Faulhaber's Formula".
    It's interesting but not particularly efficient when you actually want to compute these sums for many cases, as it requires precomputing a long recursive sequence with GROWING order of recursion (the number of previous steps required to compute the next one). So to compute the p-th power sum you'd have to apply the recursion p steps, but each step also requires you to keep and use all previously computed steps anew, resulting in roughly p^2 arithmetic operations overall to get to the p-th power sum formula. Because of this, I've been wondering whether there is a more efficient or alternative way to derive or write the p-th power sum formula, and I have been digging into several archives of mathematical sites, and alas, while there are a few alternative ways of writing Faulhaber's formula, none that I've found are computationally simpler than Faulhaber's own, if not even more involved. The most concise way of writing the closed form of the p-th power sum I've found involves taking the last row vector of the inverse of a p*p square matrix comprised of the numbers of Pascal's triangle up to p-th order, row by row, fit into an upper-right triangle, padded with 0 and shifted right by one column. It's not more efficient than Faulhaber's formula, because inverting an arbitrary p*p square matrix requires at least p^2 operations; however, this isn't a fully arbitrary matrix (it's a near upper-right triangular matrix, just with one additional non-zero diagonal, built from special numbers, namely Pascal's triangle / binomial coefficients), so maybe there are better ways of computing the explicit polynomial for the p-th power sum...

    • @nickruffmath
      @nickruffmath Před rokem

      You might be able to use the Euler-Maclaurin formula to approximate the sum using an integral (since in this case the integral is usually simpler) and then go to as many terms as needed for the floating-point accuracy of your algorithm.

    • @nickruffmath
      @nickruffmath Před rokem

      Although maybe not, because that formula also uses Bernoulli numbers...
      A way I've done it by hand is by difference calculus (also known as, or at least closely related to, umbral calculus), which is kind of like a discrete analog of calculus. It lets you recursively define the next sum in terms of the previous ones by way of expanding (n+1)^p - n^p, which is the difference operator acting on n^p.

  • @The1RandomFool
    @The1RandomFool Před 2 lety

    Another interesting place that Bernoulli numbers pop up is the closed form of the Riemann zeta function of 2n, where n is a natural number. It's related to the coefficients of the Laurent series expansion of cotangent, which is used in a complex analytical method of calculating zeta(2n). Also notice that the initial definition of Bernoulli numbers in the video bears resemblance to the integral form of the Riemann zeta function.
    Additionally, it can be shown that the Bernoulli numbers can be calculated with subsequent derivatives of x*cot(x) then evaluated at the limit as x approaches 0.
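
    For reference, the identity being referred to is zeta(2n) = (-1)^(n+1) B_{2n} (2π)^(2n) / (2 (2n)!); a quick numeric check (the Bernoulli values are hard-coded here):

    ```python
    from math import pi, factorial

    B = {2: 1/6, 4: -1/30, 6: 1/42}  # Bernoulli numbers (convention B_1 = -1/2)

    def zeta_even(n):
        return (-1) ** (n + 1) * B[2 * n] * (2 * pi) ** (2 * n) / (2 * factorial(2 * n))

    for n in (1, 2, 3):
        direct = sum(1 / k ** (2 * n) for k in range(1, 100000))
        print(2 * n, zeta_even(n), direct)
    # zeta(2) = pi^2/6, zeta(4) = pi^4/90, zeta(6) = pi^6/945
    ```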

    • @eetulehtonen69
      @eetulehtonen69 Před 2 lety

      Not only 2n but also -n. One does not even need cotangent to obtain these results. They can be achieved by just using the properties of Bernoulli polynomials!

    • @The1RandomFool
      @The1RandomFool Před 2 lety

      @@eetulehtonen69 Ah, I have not done any work with Bernoulli polynomials, so I did not know that.

    • @eetulehtonen69
      @eetulehtonen69 Před 2 lety

      @@The1RandomFool That is not surprising, since the proof came out in 2013 and, even though it's elementary, it's not easy nor necessary by any means. I happen to be an expert in this field, so I know that bit of math trivia.

  • @_zelatrix
    @_zelatrix Před 11 měsíci +1

    How can the n-th Bernoulli number B_n be defined in terms of B_n itself? You need to calculate B_n to find B_n?

  • @fransisigos
    @fransisigos Před 2 lety +4

    I'm not sure an infinite sum is a closed-form expression though?

    • @kevinmartin7760
      @kevinmartin7760 Před 2 lety +8

      p choose k-1 becomes zero when k-1 exceeds p so for any particular p there are only a finite number of nonzero terms. But I see your point...

    • @Macieks300
      @Macieks300 Před 2 lety

      @@kevinmartin7760 Yes, that means that the formula is useful but only when n is way bigger than p because there are only p elements in the sum instead of n.

    • @guruone
      @guruone Před 2 lety

      RIGHT.... Playing fast and loose with definitions here...... Maybe if the sum converges and you can express it in terms of elementary functions, ay ;)

  • @holyshit922
    @holyshit922 Před 2 lety +1

    In my opinion you are wrong: you have the wrong indices in the Cauchy product, so
    the binomial coefficients are wrong.
    Shouldn't it be r+1 at the top of the binomial coefficients?

  • @zazinjozaza6193
    @zazinjozaza6193 Před 2 lety

    If you have the time, could you make more videos explaining where Bernoulli numbers come from?

    • @eetulehtonen69
      @eetulehtonen69 Před 2 lety

      A Japanese mathematician by the name of Seki Takakazu first introduced Bernoulli numbers as the coefficients of terms in the expanded closed form of the sum of powers of integers. A few years later Bernoulli had the same approach in his book Ars Conjectandi (1713). Bernoulli numbers look rather arbitrary, and it's clear why there would be confusion. They are like the glue that holds the Faulhaber formula together; in a sense, they are defined to be the arbitrary-looking sequence of numbers that makes a coherent formula. The recursion shown in this video can be derived with elementary algebra and calculus. From that one can get an explicit double sum for the kth Bernoulli number (see the sketch below). Furthermore, that double sum can be used to verify the generating function shown here.
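
      A sketch of that explicit double sum, assuming the usual form B_n = sum_{k=0}^{n} 1/(k+1) sum_{j=0}^{k} (-1)^j C(k,j) j^n (with the B_1 = -1/2 convention); names are illustrative:

      ```python
      from fractions import Fraction
      from math import comb

      def bernoulli_explicit(n):
          total = Fraction(0)
          for k in range(n + 1):
              inner = sum((-1) ** j * comb(k, j) * j ** n for j in range(k + 1))  # 0**0 == 1
              total += Fraction(inner, k + 1)
          return total

      print([str(bernoulli_explicit(n)) for n in range(7)])
      # ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42']
      ```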

  • @qschroed
    @qschroed Před rokem

    what type of mathematical tool could be used to properly justify the operator maths you did there?

  • @dawkinsfan660
    @dawkinsfan660 Před 7 měsíci

    Which is the crazy video on the sum of cubes? Could somebody put the link here, please?

  • @emmanuellaurens2132
    @emmanuellaurens2132 Před 2 lety +1

    1:52 I was very confused, and had to look it up on the web. You, sir, are in fact not defining the nth Bernoulli number as you claim, which would imply that n or at least bn is fixed in the equation even though it is the index of the sum, but are defining all the Bernoulli numbers at once.

    • @iabervon
      @iabervon Před 2 lety

      Yeah, although if you think of the RHS as a power series, that definition says that the nth Bernoulli number is the coefficient of x^n on the right times n!, which is complicated but doesn't depend on other Bernoulli numbers. Half the time, mathematicians think of power series as infinite tables of expressions for coefficients, rather than something you would actually evaluate, and equality is term-by-term.
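
      That reading of the definition, i.e. B_n is n! times the x^n coefficient of the power series of x/(e^x - 1), can be checked directly (a quick sympy sketch):

      ```python
      import sympy as sp

      x = sp.symbols('x')
      series = sp.series(x / (sp.exp(x) - 1), x, 0, 9).removeO()
      print([series.coeff(x, n) * sp.factorial(n) for n in range(9)])
      # [1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30]
      ```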

  • @gregy1570
    @gregy1570 Před 2 lety

    Please, if you haven't yet, could you make a video about the limit of the nth root of n factorial? I can't remember what problem I was working on when it came up, but I thought it was interesting.

    • @nikoladjuric9904
      @nikoladjuric9904 Před 2 lety

      By Stirling's approximation,
      n! ~ (n/e)^n * sqrt(2*pi*n),
      so
      (n!)^(1/n) ~ (n/e) * (2*pi*n)^(1/(2n)).
      As n gets bigger, (2*pi*n)^(1/(2n)) gets close to 1,
      so for big n you can approximate (n!)^(1/n) as n/e.
      If n tends to infinity, then (n!)^(1/n) tends to infinity as well.
      Or:
      ln((n!)^(1/n)) = (1/n) * sum(ln k)
      = (1/n) * sum(ln(k/n) + ln n)
      = (1/n) * sum(ln(k/n)) + ln n.
      As n tends to infinity this tends to
      ln(n) + ∫ ln(x) dx as x goes from 0 to 1
      = ln(n) + (x(ln x - 1) from 0 to 1)
      = ln(n) - 1 = ln(n/e),
      so this again shows (n!)^(1/n) ~ n/e for big n.
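
      A quick numeric check of that asymptotic (a sketch using log-gamma to avoid overflowing n!):

      ```python
      from math import lgamma, exp

      for n in (10, 100, 1000, 10000):
          nth_root = exp(lgamma(n + 1) / n)  # (n!)^(1/n), since lgamma(n+1) = ln(n!)
          print(n, round(nth_root, 3), round(n / exp(1), 3))
      # the ratio of the two columns tends to 1, and both tend to infinity
      ```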

    • @soupy5890
      @soupy5890 Před 3 měsíci

      Or you can compare n factorial to b^n for any positive b and note n! eventually outgrows it, and the proof follows by taking nth roots and noting nth root is an increasing function

  • @gosias6328
    @gosias6328 Před 6 měsíci

    Nice, but this final formula, even after correcting the k=0 term (p choose -1 :) ), does not work for p=1; it gives the wrong result (1/2)*n*(n+2).

  • @j.d.kurtzman7333
    @j.d.kurtzman7333 Před 2 lety

    7:00 Don’t you need to know b0 AND b1 (not just b1), given the blue boxed formula, to calculate the rest of the Bernoulli numbers?

    • @gibbogle
      @gibbogle Před 2 lety

      My question too. Apparently there is a mistake.

  • @erictrefeu5041
    @erictrefeu5041 Před rokem

    ...and for non-integer p? Can you find a closed form? I have one!

  • @user-dt4cg1pj9l
    @user-dt4cg1pj9l Před rokem

    The sum of all terms of the series defined at the start is -1/2(e^x), not x/e.

  • @daniellosh1015
    @daniellosh1015 Před rokem

    -1 within the summation can be dropped, property of Bernoulli numbers

  • @davidbrisbane7206
    @davidbrisbane7206 Před 2 lety +3

    This is the first time I felt that Michael packed too much into one video. This needs three videos to do the problem justice.

  • @bmdiscover7827
    @bmdiscover7827 Před rokem

    great calculus behavior great maths

  • @samueldevulder
    @samueldevulder Před 2 lety +1

    I don’t get why the final sum is a closed form. This is an infinite series. It is hard to believe that this infinite series equals n(n+1)/2 when p=1, for instance. Do the terms become null or cancel at some point in the summation? This needs clarification here IMHO.

    • @jedthorpe9535
      @jedthorpe9535 Před 2 lety

      The sum is actually finite - for k=p+2 and above the k-th derivative of f(x) is just 0, so the p choose k-1 term vanishes for these (if you like I suppose you could define n choose k to be 0 for k > n).

    • @samueldevulder
      @samueldevulder Před 2 lety

      @@jedthorpe9535 Oh, well indeed. Of course if p choose k is 0 for k>p. Many thanks.

    • @Kapomafioso
      @Kapomafioso Před 2 lety

      @@jedthorpe9535 But even if it's finite, the original sum is also finite. In what sense is trading a finite sum for another finite sum (more complicated I'd argue, since you have to also calculate the Bernoulli numbers), considered "closed form"?

    • @tracyh5751
      @tracyh5751 Před 2 lety

      @@Kapomafioso I wouldn't call it a closed form, but if p

  • @ecoidea100
    @ecoidea100 Před 2 lety

    So the operator just divides when applying the derivative at 11:33 because it does not depend on x?

    • @deinauge7894
      @deinauge7894 Před 2 lety

      yes, the two operators commute

    • @ecoidea100
      @ecoidea100 Před 2 lety

      @@deinauge7894 I can see the question is why

    • @deinauge7894
      @deinauge7894 Před 2 lety

      @@ecoidea100 as you said, it does not depend on x.
      d/dx commutes with itself (and with any function of itself), as every operator does

  • @MyOneFiftiethOfADollar

    Never would have imagined the product of two infinite series could be written as a product of an infinite and a finite series

    • @eetulehtonen69
      @eetulehtonen69 Před 2 lety +2

      It's not a product, it's a double sum: a sum that has sums as its terms. It's called the Cauchy product.

    • @MyOneFiftiethOfADollar
      @MyOneFiftiethOfADollar Před 2 lety

      @@eetulehtonen69 He expressed the product of two infinite series as a Cauchy product

  • @gibbogle
    @gibbogle Před 2 lety

    What about b0?

  • @kingfrozen4257
    @kingfrozen4257 Před rokem +1

    OK, so we just replaced a sigma with another sigma! What's the point?

  • @youtubeuser6581
    @youtubeuser6581 Před 2 lety

    Bernoulli numbers

  • @laretslucky2899
    @laretslucky2899 Před 2 lety

    Expressing a finite sum via an infinite one... I believe there could be more ways to do it.

    • @ZedaZ80
      @ZedaZ80 Před 2 lety +1

      For natural numbers p, it is finite (after the (p+1)-th term it's all zeros).
      The formula is written as going to infinity because that is the generalization for non-integers :)

  • @khoozu7802
    @khoozu7802 Před 2 lety

    To prove the closed form at the end of the video,
    notice that n^(p+1) is the highest term of (n+1)^(p+1), and
    (p choose -1) = p!/((-1)!(p+1)!) = 1/((-1)!(p+1)).
    Then for k=0 we have
    b_0/0 * (p choose -1) * ((n+1)^(p+1) - 1)
    = 1/0 * 1/(-1)! * n^(p+1)/(p+1)
    = 1/(0*(-1)!) * n^(p+1)/(p+1)
    = 1/0! * n^(p+1)/(p+1)
    = n^(p+1)/(p+1) + ...

    • @khoozu7802
      @khoozu7802 Před 2 lety

      But it was wrong because we cannot have factorial of negative integers

    • @khoozu7802
      @khoozu7802 Před 2 lety

      and also cannot have something divided by zero

  • @pedroalonso7606
    @pedroalonso7606 Před 2 lety

    Faulhaber polynomials!

  • @pedrocasella1327
    @pedrocasella1327 Před 2 lety

    Challenge: prove that, for every p prime, there is a prime q bigger than p but less than 2p

  • @danielbranscombe6662
    @danielbranscombe6662 Před 2 lety

    OK, so you were able to calculate the value of B4 (before), but can you now calculate the value of After? lol

  • @detectiveandspynovels7140
    @detectiveandspynovels7140 Před 4 měsíci

    Wonder

  • @byronwatkins2565
    @byronwatkins2565 Před 2 lety +1

    I don't see why an infinite series is preferred to a finite series.

    • @byronwatkins2565
      @byronwatkins2565 Před 2 lety

      @@angelmendez-rivera351 You find it easier to add an infinity of terms instead of n terms? That is ridiculous.

    • @schweinmachtbree1013
      @schweinmachtbree1013 Před 2 lety

      @@byronwatkins2565 I know it seems counter-intuitive but it really is easier to evaluate infinite series. There are only a handful of finite series with closed forms, namely
      - arithmetic progressions: series like sum_{k=1}^n (a k),
      - geometric progressions: series like sum_{k=1}^n (a r^k),
      - arithmetico-geometric progressions: series like sum_{k=1}^n (a k r^k),
      - generalizations of these of the form sum_{k=1}^n (a k^m r^k),
      - certain sums of binomial coefficients,
      - certain very specific trigonometric sums, and
      - the power sums which are the topic of this video.
      In contrast there are many more infinite series with closed forms, and the reason this is not 'ridiculous' is that one doesn't add up infinitely many terms to evaluate an infinite series (which would of course be ridiculous); rather one has to apply some ingenuity (for example, to evaluate sum_{k=1}^{infinity} r^k, if you call the series S then you can relate S and rS, which lets you solve for S = r/(1-r), provided |r| < 1).
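
      For example, that standard trick written out:

      ```latex
      S = \sum_{k=1}^{\infty} r^k ,\qquad
      rS = \sum_{k=2}^{\infty} r^k = S - r
      \;\;\Longrightarrow\;\; S = \frac{r}{1-r} \quad (|r| < 1).
      ```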

    • @byronwatkins2565
      @byronwatkins2565 Před 2 lety

      @@schweinmachtbree1013 If someone presented a closed-form equivalent, then I would agree that the closed-form equivalent is preferred.

  • @josephmartos
    @josephmartos Před 2 lety +1

    This is like hardcore sex in maths. Please tell why, and why it is safe to set y=1.

  • @michaelguenther7105
    @michaelguenther7105 Před 2 lety

    Oliver Heaviside would approve!

  • @nittabhowmick8042
    @nittabhowmick8042 Před 2 lety +1

    Not so cool; it would be better if you expressed the sum as a finite sum.

  • @bethhentges
    @bethhentges Před 2 lety

    0:50 Dr. Penn says “this guy” when he could have said “this expression.”
    6:15 Dr. Penn says, “We can think about this guy right here, b sub 1…” when he could have said, “We can think about b sub 1 right here…”
    9:12 Dr. Penn says, “If we were to expand this guy,” when he could have said, “If we were to expand y.”
    14:00 Dr. Penn says, “This guy right here,” when he could have said “This product right here.”
    16:22 Dr. Penn says, “I’ll let you guys,” when he could have said, “I’ll let you folks” or “I’ll let you viewers.”

    • @Kapomafioso
      @Kapomafioso Před 2 lety +2

      Your point being...?

    • @bethhentges
      @bethhentges Před 2 lety

      @@Kapomafioso
      There was a long discussion the other day regarding Dr. Penn’s unnecessary and probably unconscious use of “this guy” and how such usage subtly creates an environment where girls and women feel excluded which can lead them to select a different field of study.

    • @russellsharpe288
      @russellsharpe288 Před 2 lety +1

      @@bethhentges 'guy' is now a gender-neutral term. This is particularly obvious when it is being used to refer to algebraic expressions.

    • @bethhentges
      @bethhentges Před 2 lety

      @@russellsharpe288 Nope.

  • @user-hq7hi2sl2o
    @user-hq7hi2sl2o Před 2 lety +1

    asnwer= n 1 ! n1 ! hmm