SciPy Beginner's Guide for Optimization

  • Date added: 25. 07. 2024
  • scipy.optimize.minimize is demonstrated for solving a nonlinear objective function subject to general inequality and equality constraints. Source code is available at apmonitor.com/che263/index.php... A minimal sketch of the pattern is shown below.
    Correction: The "product" at 0:30 should be "summation". The code is correct.
  • Science & technology
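
  • A minimal sketch of the pattern the video demonstrates (the problem data and SLSQP settings match the full code listing that appears further down this page):

    import numpy as np
    from scipy.optimize import minimize

    def objective(x):
        # x1*x4*(x1+x2+x3) + x3
        return x[0]*x[3]*(x[0]+x[1]+x[2]) + x[2]

    def constraint1(x):
        # x1*x2*x3*x4 >= 25  ->  must return a non-negative value
        return x[0]*x[1]*x[2]*x[3] - 25.0

    def constraint2(x):
        # x1^2 + x2^2 + x3^2 + x4^2 = 40  ->  must return zero
        return 40.0 - sum(xi**2 for xi in x)

    x0 = [1.0, 5.0, 5.0, 1.0]                    # initial guess (infeasible)
    bnds = [(1.0, 5.0)] * 4                      # 1 <= xi <= 5
    cons = [{'type': 'ineq', 'fun': constraint1},
            {'type': 'eq',   'fun': constraint2}]
    sol = minimize(objective, x0, method='SLSQP', bounds=bnds, constraints=cons)
    print(sol.x, sol.fun)                        # about [1, 4.74, 3.82, 1.38] and 17.01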

Comments • 300

  • @pabloalbarran7572
    @pabloalbarran7572 5 years ago +5

    I've been watching a lot of your videos and they have helped me a lot, thank you very much.

  • @markkustelski8113
    @markkustelski8113 4 years ago +12

    This is soooooo awesome. Thank you! I have been learning Python for months... this is the first time that I have solved an optimization problem

    • @apm
      @apm  4 years ago

      I'm glad it helped. Here are additional tutorials and code: apmonitor.com/che263/index.php/Main/PythonOptimization You may also be interested in this online course: apmonitor.github.io/data_science/

  • @Krath1988
    @Krath1988 3 years ago +4

    You just saved me like a century trying to figure out how to unroll these summed values. Thanks!

    • @apm
      @apm  3 years ago

      Great to hear!

  • @sriharshakodati
    @sriharshakodati 4 years ago +25

    Thanks for the short and clear explanation.
    Thanks for clearing the fear I have of equations and coding.
    Thanks for being so kind to reply to every query in the comments section .
    Thanks for designing the course and uploading the videos
    Thanks for helping many students like me.
    You increased my confidence and changed my perception... that I can do more. I wish you get what you deserve.
    You will be remembered :)
    Thank you, Sir!!!!

  • @first.samuel
    @first.samuel 7 years ago +5

    Wow! Best intro ever... looking forward to more stuff like this

  • @daboude8332
    @daboude8332 5 years ago +2

    Thank you for your help. We have an assignment next week and the last question is exactly the same ! :)

    • @apm
      @apm  5 years ago

      I'm glad that it helped.

  • @SolvingOptimizationProblems

    Many thanks sir for such a great and clear instruction on solving LP in python.

  • @LittlePharma
    @LittlePharma 3 years ago +2

    This video saved me a big headache, thank you!

  • @prabhatmishra5667
    @prabhatmishra5667 2 years ago +2

    I have no words to thank you, love from India

  • @MrSkizzzy
    @MrSkizzzy 6 years ago +33

    This gave me hope for my thesis. Thank you sir :D

    • @anjalabotejue1730
      @anjalabotejue1730 4 years ago +1

      I wish I can have some of that for mine lol

    • @MrSkizzzy
      @MrSkizzzy 4 years ago +8

      @@anjalabotejue1730 everything worked out in the end 😊
      Wish you much success

    • @TopAhmed1
      @TopAhmed1 3 years ago +4

      Last time I visited this channel, it was for control theory lectures during my undergrad; now I've finished my MSc.

    • @MrSkizzzy
      @MrSkizzzy 3 years ago +1

      @@TopAhmed1 Im so close haha 😁

  • @johnjung-studywithme
    @johnjung-studywithme 1 year ago +1

    quick and to the point. Thank you!

  • @amirkahinpour547
    @amirkahinpour547 6 years ago +2

    Thank you very much for this. Awesome one

  • @user-mk3yl5fe4m
    @user-mk3yl5fe4m 2 years ago

    I wish I had found you an hour earlier. Thank you! So clear and helpful. Ahhhh

  • @AJ-et3vf
    @AJ-et3vf 2 years ago +1

    Very helpful and educational sir. Thank you so much for your videos

  • @jiansenxmu
    @jiansenxmu 6 years ago +2

    It’s very clear, thanks!

  • @galaxymariosuper
    @galaxymariosuper 3 years ago +2

    absolutely amazing tutorial!

  • @mobi02
    @mobi02 1 year ago +1

    Thank you for clear and simple explanation

  • @JanMan37
    @JanMan37 3 years ago +1

    Thanks so much. So much clearer now!

  • @yudongzhang5324
    @yudongzhang5324 1 year ago

    Awesome Video! Very clear and helpful!!! Thanks!

  • @mahletalem
    @mahletalem 4 years ago +1

    Thank you! This helped a lot.

  • @polcasacubertagil1967
    @polcasacubertagil1967 1 year ago +1

    Short and concise, thanks

  • @franciscoangel5975
    @franciscoangel5975 7 years ago +62

    At 0:39 I think you should write a "summation" of squared terms, sum x_i^2 = 40, not a product.

    • @apm
      @apm  7 years ago +17

      +Francisco Angel, you are correct. At 0:39 it is the summation of the squares that equals 40, not the product. The constraint above (x1*x2*x3*x4>=25) is the product constraint...

    • @apm
      @apm  7 years ago

      ...the product constraint is >=25.

    • @franciscoangel5975
      @franciscoangel5975 7 years ago +2

      APMonitor.com
      yeah, but you wrote the product of squared variables equal to 40, and the constraint is the sum of squared variables equal to 40; check time 0:39 in the video. You used big pi notation (product) instead of big sigma (summation).

    • @lazypunk794
      @lazypunk794 5 years ago +22

      He already admitted his mistake bruh

    • @tigrayrimey6418
      @tigrayrimey6418 3 years ago +1

      @@franciscoangel5975 But, he did it correctly in the implementation.

  • @jlearman
    @jlearman 5 years ago

    Thank you, very helpful!

  • @constanceyang4886
    @constanceyang4886 6 years ago +1

    Great tutorial!

  • @AK-nw2fg
    @AK-nw2fg 3 months ago

    It's great 👍
    Thanks for simple learning.

  • @hayat.tech.academy
    @hayat.tech.academy 2 years ago +1

    Thank you for the explanation. There is a slip of the tongue at 0:30 where you said "the product"; it's the summation of x_i squared. Thanks

    • @apm
      @apm  2 years ago

      Thanks for catching that!

  • @LeeJohn206
    @LeeJohn206 5 years ago +1

    Awesome! thank you.

  • @sayanchakrabarti5080
    @sayanchakrabarti5080 4 years ago +2

    Thank you sir for this demo. Could you please let me know if there is any implementation for finding Pareto optimal points for multi-objective optimization problems (e.g., two variables) using SciPy?

    • @apm
      @apm  4 years ago +2

      I don't have source code for this but there is a book chapter here: apmonitor.com/me575/index.php/Main/BookChapters (see Section 6.5)

  • @cristobalgarces1675
    @cristobalgarces1675 2 years ago +2

    Great video. Very helpful for the implementation of the code. I had a quick question though. How would one extract the design point history from each iteration? I ask because I have to do a tracking of the history as it converges to the minimizer. Thanks a ton!

    • @apm
      @apm  2 years ago

      You can set the maximum iterations to 1, 2, 3, 4... and then record the x values as it proceeds to the optimum. If you code the optimization algorithm yourself, then you can track the progress as well: apmonitor.com/me575/index.php/Main/QuasiNewton
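
      A sketch of another option (not from the video): scipy.optimize.minimize also accepts a callback argument that is invoked once per iteration and can record the design-point history directly. The names objective, x0, bnds, and cons are assumed from the example near the top of the page.

      history = []                      # one design point per iteration

      def record(xk):                   # the solver calls this after each iteration
          history.append(xk.copy())

      sol = minimize(objective, x0, method='SLSQP',
                     bounds=bnds, constraints=cons, callback=record)
      print(len(history), 'iterations recorded')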

  • @jeolee2022
    @jeolee2022 7 years ago +2

    Thanks a lot.

  • @sepidehsa5707
    @sepidehsa5707 6 years ago

    How can we compute the errors associated to the fitted parameters?

  • @victorl.mercado5838
    @victorl.mercado5838 7 years ago +1

    Nice job!

  • @eso907
    @eso907 5 years ago +1

    I'm new to the optimization field and found the video very useful. Is the example mentioned in the video QCQP (quadratically constrained quadratic programming)?

    • @apm
      @apm  5 years ago

      The objective function isn't quadratic and neither is the product constraint so it would just be classified as a general nonlinear programming problem.

  • @aliguzel8463
    @aliguzel8463 5 years ago +1

    Great video..

  • @ilaydacolakoglu329
    @ilaydacolakoglu329 3 years ago +1

    Hi, thanks for the video. For a problem like this, which algorithms (genetic, simulated annealing, particle swarm) can we apply, and how?

    • @apm
      @apm  3 years ago +1

      This one is solved with a gradient descent method. Other methods are discussed at apmonitor.com/me575

  • @ishgho
    @ishgho 4 years ago +1

    Is there any way to keep some of the variables constant? Like, those variables are in the objective function, but we don't want to solve for those values? Or is the only way to keep the same values for lb and ub in the bounds?

    • @apm
      @apm  4 years ago

      If they are constants, then you can define them as floating point numbers that are not part of the solution vector. Otherwise, you can set the lb equal to the ub to force the variable not to change.
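
      A short sketch of the second suggestion, pinning a variable with equal bounds (fixing x2 of the video's example at 5.0 is a made-up illustration; the other names are assumed from the example near the top of the page):

      bnds = [(1.0, 5.0), (5.0, 5.0), (1.0, 5.0), (1.0, 5.0)]   # lb == ub fixes x2
      sol = minimize(objective, x0, method='SLSQP',
                     bounds=bnds, constraints=cons)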

  • @user-gt2th3wz9c
    @user-gt2th3wz9c 4 years ago +2

    Note for anyone who has a less-than-or-equal inequality constraint. Docs: "Equality constraint means that the constraint function result is to be zero whereas inequality means that it is to be non-negative."
    You need to multiply the constraint by -1 to reverse the sign (see the sketch after this thread).

    • @apm
      @apm  4 years ago +1

      That is a very good explanation!

    • @zsa208
      @zsa208 1 year ago

      I was wondering the same. Thanks for the supplementary information.
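
      A sketch of that sign flip (the constraint x1 + x2 <= 10 is a made-up example; 'ineq' in scipy expects a function that returns a non-negative value at feasible points):

      # g(x) <= b is rewritten as b - g(x) >= 0 before handing it to the solver
      def le_constraint(x):
          return 10.0 - (x[0] + x[1])    # enforces x[0] + x[1] <= 10

      con = {'type': 'ineq', 'fun': le_constraint}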

  • @mohammed-taharlaraba1494
    @mohammed-taharlaraba1494 3 years ago +13

    Thx a lot for this tutorial! There is something puzzling me about the result though. The solution x you're getting is not a minimizer, since the objective function evaluated at this point is 17, which is greater than the initial guess's objective of 16. Why is that?

    • @apm
      @apm  3 years ago +17

      The initial guess wasn't a feasible solution because it violated a constraint. The final solution is the best objective that also satisfies the constraints. Great question!

    • @24Ship
      @24Ship 2 years ago

      i had the same question, thanks for asking

  • @m1a2tank
    @m1a2tank 6 years ago

    Can you get the solution of "maximum ellipse in a certain polygon" using the optimize library?
    For example, A(x-x0)^2 + B(y-y0)^2 = 1 s.t. a set of linear constraints which describe a polygon in X-Y space.
    And I need to get the best x0, y0 which minimize A*B

    • @apm
      @apm  6 years ago

      Here is a discussion thread in the APMonitor support group that is similar to your question: groups.google.com/d/msg/apmonitor/MET1rx3Cr4s/iQN9kiPlBgAJ I also recommend this link as a starting point: apmonitor.com/me575/index.php/Main/CircleChallenge

  • @fajarariefpermatanatha2584
    @fajarariefpermatanatha2584 8 months ago +1

    thank you sir, god bless you

  • @Mohammad12765
    @Mohammad12765 4 years ago +1

    Awesome, thanks

  • @pallavbakshi612
    @pallavbakshi612 6 years ago +1

    Great! Thanks.

  • @AmusiclouderH
    @AmusiclouderH 5 years ago +1

    Hello APMonitor, thank you for making such wonderful videos. I have a little problem: I tried adapting the optimization with a nested list, but SciPy doesn't seem to understand the structure, so I changed my model to a NumPy array, which it understands, but it gives me an "indexing a scalar" error. If you can show me how to implement the optimization using arrays, that would be really helpful.

    • @apm
      @apm  5 years ago

      Check out the Python GEKKO package. You can use list comprehensions to implement arrays. apmonitor.com/wiki/index.php/Main/GekkoPythonOptimization

  • @luigiares3444
    @luigiares3444 3 years ago +1

    What happens if there are multiple solutions? I mean, there is more than one set of x1, x2, x3, x4 as a solution... are they all stored in the final array (which would become multi-dimensional)? Thanks

    • @apm
      @apm  3 years ago

      The solver reports only the local minimum. Try different initial guesses to get other local minima.

  • @paulb5044
    @paulb5044 6 years ago +1

    Hello John. I am wondering if you have any material related to Python and OR. I am learning OR on one side and Python on the other, and I am wondering if there is any book in which I can combine them. Do you have any material or course that I can access? Looking forward to hearing from you. Paul.

    • @apm
      @apm  6 years ago

      Sorry, no Operations Research specific material but I do have a couple courses that may be of interest to you including engineering optimization and dynamic optimization. apm.byu.edu

    • @paulb5044
      @paulb5044 6 years ago +1

      APMonitor.com Thank you so much. I will check it. Regards Paúl

  • @ghostwhowalks5623
    @ghostwhowalks5623 4 years ago +1

    Fantastic video. What is the best way to add bounds for multiple variables, say more than 24 (optimizing across time, 12 months)? I definitely don't want to list them all... perhaps loop through the variable matrix... but hopefully there is a more efficient way (see the sketch after this thread)? Thanks!

    • @apm
      @apm  4 years ago

      Scipy.optimize.minimize is going to struggle with a problem that size. I'd recommend Python gekko for your problem, especially if you have differential and algebraic equations.
      apmonitor.com/wiki/index.php/Main/GekkoPythonOptimization

    • @ghostwhowalks5623
      @ghostwhowalks5623 4 years ago +1

      @@apm - yeah I've been looking at that! I'll dig in further....I hope there's an easier way to specify bounds for all variables...vs. literally listing them as we see in most of the examples. I'll def be exploring Gekko Thanks for the very prompt response!
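
      Following up on the bounds question in this thread, a short sketch: a bounds sequence can be built with list multiplication instead of being typed out (the 24-variable size comes from the question; the 0 to 10 range is an assumption):

      import numpy as np

      n = 24                            # e.g. two years of monthly decisions
      bnds = [(0.0, 10.0)] * n          # one (lb, ub) pair per variable
      x0 = np.ones(n)                   # matching initial guess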

  • @stakhanov.
    @stakhanov. 6 months ago +1

    hi, why is the min objective with the initial guess numbers lower than the min objective at the end? shouldn't the solver return the lowest value of the function?

    • @rrc
      @rrc 6 months ago

      The initial guess doesn't satisfy the constraints, so it isn't a potential solution. The final objective is higher because the solver finds the best combination of decision variables that minimizes the objective while satisfying the constraints.

  • @adamhartman8361
    @adamhartman8361 2 years ago +1

    Thanks for this very approachable intro to using Python to do optimization. What book(s) can you recommend specifically for Python and optimization?

    • @apm
      @apm  2 years ago

      Here is a free online book on optimization: apmonitor.com/me575/index.php/Main/BookChapters with an accompanying online course.

  • @ps27ps27
    @ps27ps27 3 years ago +1

    Thanks for the video! Does it matter if the constraint is < or > for the method? Should it be in the standard form like for LP?

    • @apm
      @apm  3 years ago

      I think it is just the way I showed in the video. You can always reverse the sign of the inequality with a negative sign or there may be an option in the solver to switch it.

  • @saeedbaig4249
    @saeedbaig4249 7 years ago +1

    Great video; it helped me a lot.
    Just wondering tho, if u wanted your bounds to be, say, STRICTLY greater than 1 and STRICTLY less than 5 (i.e. 1 < x < 5), how would you do that?

    • @rrc
      @rrc 7 years ago +1

      Saeed Baig, the numbers 4.9999999 and 5.0000000 are treated almost equivalently when we use continuous values versus integer or discrete variables. That means that a strict inequality behaves the same as a non-strict one to within the solver tolerance.

    • @saeedbaig4249
      @saeedbaig4249 7 years ago +1

      O ok. Thanks for the quick reply!

  • @Pruthvikajaykumar
    @Pruthvikajaykumar 2 years ago +1

    In the end, the fun value turned out to be greater than at the initial guess? Is it optimized? What am I missing?

    • @apm
      @apm  2 years ago +1

      The initial guess is infeasible. It doesn't satisfy the constraints.

  • @amanjangid6375
    @amanjangid6375 4 years ago +1

    Thanks for the video, but what do I do if my objective is to maximize?

    • @apm
      @apm  4 years ago +3

      You can multiply your objective by -1 to switch from minimizing to maximizing.
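
      A short sketch of that sign flip (names assumed from the example near the top of the page):

      def neg_objective(x):
          return -objective(x)          # maximizing f is minimizing -f

      sol = minimize(neg_objective, x0, method='SLSQP',
                     bounds=bnds, constraints=cons)
      print('maximum value:', -sol.fun) # flip the sign back when reporting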

  • @djenaihielhani5677
    @djenaihielhani5677 5 years ago

    Is the minimize function based on gradient descent?

    • @apm
      @apm  5 years ago

      The optimizer will work better if you have a continuous objective function and gradients. The scipy.optimize.minimize package obtains gradients through finite differences. It uses the gradients to determine the next trial point.
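
      As a sketch, an analytic gradient can also be supplied through the jac argument so the objective is not estimated by finite differences (the objective and other names are assumed from the example near the top of the page; the derivatives below are worked out from that objective):

      def gradient(x):
          return np.array([x[3]*(2*x[0] + x[1] + x[2]),   # df/dx1
                           x[0]*x[3],                     # df/dx2
                           x[0]*x[3] + 1.0,               # df/dx3
                           x[0]*(x[0] + x[1] + x[2])])    # df/dx4

      sol = minimize(objective, x0, jac=gradient, method='SLSQP',
                     bounds=bnds, constraints=cons)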

  • @danielmarshal2532
    @danielmarshal2532 3 years ago +3

    Hi, what I noticed: if you evaluate at x0 you'll get 16, and if you use the 'minimize' function, it will give you an array whose result is around 17. But doesn't that mean our goal with the 'minimize' function hasn't been fulfilled yet?

    • @apm
      @apm  3 years ago +5

      The initial guess violates the constraints so it isn't a feasible solution. The optimal solution is the minimum of all the feasible solutions.

  • @C2toC4
    @C2toC4 4 years ago +1

    Why do you use x1, x2, x3, x4 in returning the objective function but then switch to using x[0], x[1], x[2], x[3] in the constraint functions?
    I'm not trying to be pedantic, but it would help me if it was consistent one way or the other, unless there's a reason they're like that?
    e.g. could the constraint1(x) function be written as return x1 * x2 * x3 * x4 - 25 instead? (purely to be consistent)
    and/or could you write constraint1 in the same way you did for constraint2, using a for loop?
    e.g.
    def constraint1(x):
        product = 1
        for i in range(4):
            product = product * x[i]
        return product - 25
    or instead write constraint2(x) as you did with the objective function return (and as I did with constraint1(x) above), using x1, x2, x3, x4 instead of x[i]. This way it is consistent and may be easier to follow for people who aren't good coders or mathematicians.
    (I apologise if these are bad questions/suggestions; I am not a programmer, just trying to learn, and in this case understand why you wrote things out in 3 different ways instead of 1.)
    Thank you kindly for any help in understanding (and of course for the video itself - cheers)!

    • @apm
      @apm  4 years ago

      Thanks for the helpful suggestions. I used x1, x2, etc. because that is how mathematicians write the optimization problem. Python begins with index 0, so I needed to create an array for solving the problem, where x[0] maps to x1. I can see how this is very confusing, and I'll try to avoid this difference in future videos.

  • @fffppp8762
    @fffppp8762 7 years ago +1

    Thanks

  • @forecenterforcustomermanag7715

    Easy and lucid style

  • @alexandrudavid2103
    @alexandrudavid2103 7 years ago

    Hi, do you have any idea why I'm getting this:
    "C:\Python34\lib\site-packages\scipy\optimize\slsqp.py:341: RuntimeWarning: invalid value encountered in greater bnderr = where(bnds[:, 0] > bnds[:, 1])[0]"? I'm getting the result, but I'm afraid this error is messing up the results.
    Edit: seems it has something to do with the infinite bound. I read somewhere that to specify an infinite bound you had to input None (e.g. b = (0, None))

    • @apm
      @apm  7 years ago

      +Alexandru David, I had this same problem with infinite bounds and 'None'. I get around this by specifying a large upper or lower bound such as 1e10 or -1e10.

  • @gretabaumann6231
    @gretabaumann6231 4 years ago +1

    Is there a way to use scipy.optimize.minimize with only integer values?

    • @apm
      @apm  4 years ago

      No, it only optimizes with continuous values. Here is a solution with a mixed integer solver (see GEKKO solutions): apmonitor.com/wiki/index.php/Main/IntegerBinaryVariables
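
      A sketch of the GEKKO route for integer variables (assumptions: gekko installed with pip install gekko, and the one-variable objective below is made up for illustration):

      from gekko import GEKKO

      m = GEKKO(remote=False)
      x = m.Var(value=2, lb=1, ub=5, integer=True)  # integer decision variable
      m.Minimize((x - 3.4)**2)                      # hypothetical objective
      m.options.SOLVER = 1                          # APOPT handles integer variables
      m.solve(disp=False)
      print(x.value[0])                             # nearest feasible integer: 3.0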

  • @javonfair
    @javonfair 4 years ago +1

    What's the difference between the "ineq' and 'eq' designations on the constraints?

    • @apm
      @apm  4 years ago +1

      They are inequality and equality constraint types: 'ineq' (e.g. x+y<=10, written as a function that must return a non-negative value) and 'eq' (e.g. x+y=10, written as a function that must return zero).

  • @sriharshakodati
    @sriharshakodati 4 years ago +1

    Great video. May I ask if the minimize function is basically finding the root of the equation x1*x4*(x1+x2+x3)+x3=0?
    This would help a lot. Sorry for the dumb question. Thanks!

    • @apm
      @apm  4 years ago +1

      I think the lowest objective value that it can find is around 17, so it would be x1*x4*(x1+x2+x3)+x3=17. The constraints prevent it from going to zero or into a negative region. The lowest value for each variable is 1, so the minimum with no other constraints would be 1(3)+1=4.

    • @sriharshakodati
      @sriharshakodati 4 years ago +1

      @@apm Thanks for the reply. Got it. Can we give a bunch of nonlinear equations to the minimize function and get the roots, given that the only constraint is the roots being positive?
      Thanks!

    • @sriharshakodati
      @sriharshakodati 4 years ago

      I think we should have 2 constraints in this case.
      One is to make sure the function doesn't end up being negative.
      The other is to make sure the roots are not negative.
      @APMonitor.com any thoughts would be helpful. Thanks!

    • @apm
      @apm  4 years ago +1

      @@sriharshakodati yes that is possible.

  • @danielj.rodriguez8621
    @danielj.rodriguez8621 2 years ago +1

    Thank you, Prof. Hedengren. Noting that this video was posted five years ago, do you think in 2022 there is still a case for using SciPy for optimization tasks instead of using GEKKO?
    In particular, GEKKO’s MINLP does not have an equivalent within SciPy. Is there any optimization task where SciPy would still be required instead of using GEKKO?

    • @apm
      @apm  2 years ago +1

      Yes, solvers such as scipy.optimize.minimize are very useful when the equations are black-box, meaning that there are function evaluations but there is no access to get the equations for automatic differentiation.

    • @danielj.rodriguez8621
      @danielj.rodriguez8621 2 years ago

      @@apm Thanks for that valuable tip. Currently dealing with in-line multiphase flow measurement (MPFM) calculations; in particular, trying to optimize the system configuration per competing criteria (e.g., minimize measurement uncertainty and minimize acceptable pressure loss (both OPEX concerns), and minimize COGS (CAPEX and gross margin concerns)). MPFM systems quickly result in de facto black-box objective functions.

  • @anuragsharma1953
    @anuragsharma1953 5 years ago

    Hello @APMonitor, thanks for this video. I have one doubt. I am trying to use scipy.optimize with a Pandas dataframe. I want to minimize my target column based on other columns. Can you please tell me how to do that?

    • @apm
      @apm  5 years ago +1

      The objective function always needs to have input parameters and return a scalar objective. You can do that with Pandas but you may need to condense your target minimized column with something like np.sum(my_pandas_df['my_column']) or np.sum(my_pandas_df['my_column']**2).

    • @anuragsharma1953
      @anuragsharma1953 5 years ago

      Thanks. I tried the same thing.

    • @anuragsharma1953
      @anuragsharma1953 5 years ago

      But I am stuck with one problem. There are some constraints in my problem (i.e., the sum of a few variables must be equal to a certain value), but the scipy optimize function is distributing them almost equally.

  • @BiscuitZombies
    @BiscuitZombies 4 years ago +1

    How would you do this for integers only?

    • @apm
      @apm  4 years ago

      Here is information on solving binary, integer, and special ordered sets in Python: apmonitor.com/wiki/index.php/Main/IntegerBinaryVariables

  • @isaacpemberthy507
    @isaacpemberthy507 5 years ago +1

    Hello APMonitor, great video. A question: what is the capacity of SciPy? Does it have a restriction on the number of variables or constraints? Does it have the capacity to solve, for example, the vehicle routing problem?

    • @apm
      @apm  5 years ago

      You can typically do 50-100 variables and constraints with Scipy.optimize.minimize and potentially larger if the problem is mostly linear. If you need something much larger (100000+ variables) then check out Python Gekko (gekko.readthedocs.io/en/latest/).

    • @apm
      @apm  5 years ago

      There is no restriction with Scipy but the problem may take a long time. Here is a different scale-up analysis on ODEINT: czcams.com/video/8kx6vC9gTLo/video.html

  • @muhammadsarimmehdi
    @muhammadsarimmehdi 5 years ago

    This is for just two constraints. What if we have N constraints and, due to the way the problem is posed, we cannot use matrices? How do we use a generic constraint function N times in this case?

    • @apm
      @apm  5 years ago

      You would add them as additional constraints in the tuple (separated with commas) that you give as an input to the optimizer.
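
      A sketch of building N constraints in a loop (the per-variable requirement x[i] >= 1 is a made-up example; note the i=i default argument, which freezes the loop variable inside each lambda; objective and x0 are assumed from the example near the top of the page):

      N = 4
      cons = tuple({'type': 'ineq', 'fun': lambda x, i=i: x[i] - 1.0}
                   for i in range(N))   # enforces x[i] >= 1 for each i
      sol = minimize(objective, x0, method='SLSQP', constraints=cons)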

  • @ivanmerino3692
    @ivanmerino3692 4 years ago +1

    I have a question. If we change the guess, for instance to 1 for all values, the script also gives all 1s as the result for the variables, and that's totally wrong. So what could be the problem here?

    • @apm
      @apm  4 years ago +1

      scipy.optimize.minimize isn't the best solver in Python. Here are some other options: scicomp.stackexchange.com/questions/83/is-there-a-high-quality-nonlinear-programming-solver-for-python You may also want to try Gekko: apmonitor.com/che263/index.php/Main/PythonOptimization (see Method #3)

    • @ivanmerino3692
      @ivanmerino3692 4 years ago

      @@apm thanks! Very useful

  • @mingyan8081
    @mingyan8081 7 years ago +36

    Does the initial set of x minimize the function?
    The result of your code shows f(x)=17.01..., which is larger than 16.
    Why is that?

    • @rrc
      @rrc 7 years ago +32

      Ming Yan, good observation! If you check the initial guess, you'll see that the equality constraint (=40) is not satisfied so it isn't a feasible candidate solution. There is a good contour plot visualization of this problem here: apmonitor.com/pdc/index.php/Main/NonlinearProgramming

    • @mingyan8081
      @mingyan8081 7 years ago +6

      thank you for your explanation and references. I will be more careful next time.

    • @rrc
      @rrc 7 years ago +16

      Ming Yan, I think this was a great question because many others have the same concern and your question will likely help others too.

    • @YawoviDodziMotchon
      @YawoviDodziMotchon 5 years ago +2

      definitely

    • @saicharangarrepalli9590
      @saicharangarrepalli9590 3 years ago

      I was wondering the same. Thanks for this.

  • @LL-lb7ur
    @LL-lb7ur 5 years ago

    Thank you very much for your tutorial. Does the SciPy minimize function require data?

    • @apm
      @apm  5 years ago +1

      You can use SciPy minimize without data. In this tutorial, there is no data; there is only an objective function and two constraints. (A data-fitting sketch follows this thread.)

    • @LL-lb7ur
      @LL-lb7ur 5 years ago

      Many thanks!
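
      For the question above, a sketch of how data would enter: extra arrays can be passed to the objective through the args parameter of scipy.optimize.minimize (the model, numbers, and parameter names here are made up for illustration):

      import numpy as np
      from scipy.optimize import minimize

      t = np.array([0.0, 1.0, 2.0, 3.0])         # hypothetical measurement times
      y = np.array([1.0, 2.7, 7.4, 20.1])        # hypothetical measured values

      def sse(p, t, y):                          # sum of squared errors
          return np.sum((p[0]*np.exp(p[1]*t) - y)**2)

      fit = minimize(sse, x0=[1.0, 0.5], args=(t, y))
      print(fit.x)                               # fitted [amplitude, rate]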

  • @manoocgegr1364
    @manoocgegr1364 6 years ago

    Hello,
    thank you for your video.
    I am wondering how it is possible to solve this problem without an initial guess. I tried to remove the initial guess, but it looks like an initial guess must be provided as an argument to minimize(). Any advice will be highly appreciated.

    • @apm
      @apm  6 years ago +1

      +Manoochehr Akhlaghinia, an initial starting point for the optimizer is always required. If you don't have a good initial guess, then I'd recommend starting with a vector of 1s or 0s.

    • @manoocgegr1364
      @manoocgegr1364 6 years ago

      APMonitor.com thank you so much for your reply. It looks like if the initial guess is not good, then the optimizer cannot converge to an acceptable solution. For instance, the example you covered in your video does not give the right answer with [0,0,0,0] or [1,1,1,1].
      I have migrated from MATLAB to Python. In MATLAB, the genetic algorithm can solve any constrained, bounded optimization problem even with no initial guess. So robust.
      I have also noticed there is differential evolution in Python, which optimizes without an initial guess, but it can't work with constraints.
      I'm wondering if there is no such tool in Python at all (a constrained, bounded optimizer with no initial guess) and I had better stop searching for it and focus on finding a robust algorithm to provide initial guesses for current Python optimizers. Any advice will be highly appreciated.

    • @apm
      @apm  6 years ago

      +Manoochehr Akhlaghinia, there may be a GA package for Python such as pypi.python.org/pypi/deap but I haven't tried it. I don't know of a package that doesn't require initial guesses. You could also try simulated annealing to generate initial guesses such as apmonitor.com/me575/index.php/Main/SimulatedAnnealing

    • @manoocgegr1364
      @manoocgegr1364 6 years ago

      Thank you. Really appreciate it

  • @jamesang7861
    @jamesang7861 3 years ago +1

    can we use Scipy.Optimize.Minimize for integer LP?

    • @apm
      @apm  3 years ago +1

      No, it doesn't do integer variables. Try Gekko: apmonitor.com/wiki/index.php/Main/IntegerProgramming

  • @davidtorgesen2037
    @davidtorgesen2037 5 years ago

    I have a question. I attempted to maximize the function but couldn't get it to work right. What do I need to change? Thank you for your help!
    This is what I did:
    def objective(x, sign=1.0):
        x1 = x[0]
        x2 = x[1]
        x3 = x[2]
        x4 = x[3]
        return sign*(x1*x4*(x1+x2+x3)+x3)

    def constraint1(x):
        return x[0]*x[1]*x[2]*x[3]-25.0

    def constraint2(x):
        sum_sq = 40
        for i in range(4):
            sum_sq = sum_sq - x[i]**2
        return sum_sq

    sol2 = minimize(objective, x0, args=(-1.0,), method='SLSQP',
                    bounds=bnds, constraints=cons)
    But it gave me this (it went beyond constraint1)
    print(sol2)
    fun: -134.733824524366
    jac: array([-45.75387001, -16.64193916, -17.64193916, -36.49635124])
    message: 'Optimization terminated successfully.'
    nfev: 68
    nit: 11
    njev: 11
    status: 0
    success: True
    x: array([4.56763313, 1.66137361, 1.76120447, 3.64344948])
    print(objective(sol2.x,1))
    print(constraint1(sol2.x))
    print(constraint2(sol2.x))
    134.733824524366
    23.69462805071327
    -1.0361844715589541e-10

    • @apm
      @apm  5 years ago

      You are missing a few lines. You may also want to check out the Gekko solution and syntax: apmonitor.com/che263/index.php/Main/PythonOptimization Here is a solution to that problem with scipy.optimize.minimize:
      import numpy as np
      from scipy.optimize import minimize

      def objective(x):
          return x[0]*x[3]*(x[0]+x[1]+x[2])+x[2]

      def constraint1(x):
          return x[0]*x[1]*x[2]*x[3]-25.0

      def constraint2(x):
          sum_eq = 40.0
          for i in range(4):
              sum_eq = sum_eq - x[i]**2
          return sum_eq

      # initial guesses
      n = 4
      x0 = np.zeros(n)
      x0[0] = 1.0
      x0[1] = 5.0
      x0[2] = 5.0
      x0[3] = 1.0

      # show initial objective
      print('Initial Objective: ' + str(objective(x0)))

      # optimize
      b = (1.0, 5.0)
      bnds = (b, b, b, b)
      con1 = {'type': 'ineq', 'fun': constraint1}
      con2 = {'type': 'eq', 'fun': constraint2}
      cons = ([con1, con2])
      solution = minimize(objective, x0, method='SLSQP',
                          bounds=bnds, constraints=cons)
      x = solution.x

      # show final objective
      print('Final Objective: ' + str(objective(x)))

      # print solution
      print('Solution')
      print('x1 = ' + str(x[0]))
      print('x2 = ' + str(x[1]))
      print('x3 = ' + str(x[2]))
      print('x4 = ' + str(x[3]))

  • @MrStudent1978
    @MrStudent1978 5 years ago

    Sir, can you please demonstrate trust region algorithm implementation?

    • @apm
      @apm  5 years ago

      Here is some relevant information on the trust region approach: neos-guide.org/content/trust-region-methods

  • @mukunthans3441
    @mukunthans3441 3 years ago +2

    Sir, are there any advantages or disadvantages in using existing methodologies such as artificial neural networks, fuzzy logic, and genetic algorithms for optimization of a linear objective function, subject to linear equality and linear inequality constraints, over the SciPy method?
    I think the SciPy method will provide accurate results in a short time compared to the other methods, since the complexity of a linear problem is low.

    • @apm
      @apm  3 years ago

      Gradient based methods like scipy find a local solution and are typically much faster than the other methods.

    • @mukunthans3441
      @mukunthans3441 3 years ago

      @@apm Thankyou sir

  • @azad8upt
    @azad8upt 6 years ago

    How can we implement a "strictly less than" constraint?

    • @apm
      @apm  6 years ago

      Because you have continuous variables, the < and <= constraints behave the same in practice: values that differ by less than the solver tolerance are treated as equal.

  • @amitshiuly3125
    @amitshiuly3125 4 years ago +1

    Can anyone help me carry out multi-objective optimisation using Python?

    • @apm
      @apm  4 years ago

      Here is some help: apmonitor.com/do/index.php/Main/MultiObjectiveOptimization There is also more information in the optimization course: apmonitor.com/me575

  • @prof.goutamdutta4346
    @prof.goutamdutta4346 4 years ago +1

    How does SciPy compare with AMPL, GAMS, AIMMS?

    • @apm
      @apm  4 years ago

      SciPy doesn't provide exact derivatives with automatic differentiation, while the others do provide gradients for faster solutions. A modeling language can also help with common modeling constructs. A couple of alternatives for Python are Pyomo and Gekko. There is a Gekko tutorial and comparison at apmonitor.com/che263/index.php/Main/PythonOptimization and additional Gekko tutorials are here: apmonitor.com/wiki/index.php/Main/GekkoPythonOptimization Pyomo and Gekko are both freely available, while the others that you listed have a licensing fee. pip install gekko

  • @Thimiosath13
    @Thimiosath13 5 years ago

    Nice!!! Any example for multi-objective linear programming?

    • @apm
      @apm  5 years ago

      Here are some linear programming examples: apmonitor.com/me575/index.php/Main/LinearProgramming Multiple objectives are typically combined into one objective function. If you can't combine them, then you may need to create a Pareto frontier. More information is available at the optimization class website.

  • @cramermoraes
    @cramermoraes 4 years ago +1

    That's the least squares method!

  • @vigneshmani2636
    @vigneshmani2636 5 years ago

    Can you explain the use of this as well, so it will be easier for beginners like me to understand?

    • @apm
      @apm  5 years ago

      Sure, there is additional information at apmonitor.com/che263/index.php/Main/PythonOptimization - you may also want to look at Python GEKKO.

  • @jayashreebehera3197
    @jayashreebehera3197 4 years ago +3

    But how come the minimization gave an objective function higher than before? You see, initially your guess gave this as equal to 16. After minimization, it was 17.014... Shouldn't minimization minimize the objective function and return points that give a value even smaller than that of the guess?

    • @apm
      @apm  4 years ago +5

      The initial guess didn't satisfy the inequality and equality constraints. It had a better objective function but it was infeasible.

    • @mohammedraza100
      @mohammedraza100 3 years ago +1

      Thanks I was wondering the same.

  • @ivankontra3446
    @ivankontra3446 4 years ago +1

    What if it was strictly larger than 25? What would you type then?

    • @apm
      @apm  4 years ago

      It is the same if you are dealing with continuous variables. The value 25.00000000 and 25.00000001 are considered to be the same if the solver tolerance is 1e-8.
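
      As a sketch of one common numerical workaround (not from the video): if a strict margin matters, a small epsilon can be subtracted in the constraint function, at the cost of slightly shrinking the feasible region.

      eps = 1e-6
      def constraint1_strict(x):
          return x[0]*x[1]*x[2]*x[3] - 25.0 - eps   # keeps the product >= 25 + eps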

  • @easonshih4818
    @easonshih4818 5 years ago

    At 4:15 you said to move everything to the right side for the second constraint, but what if I move everything to the left side and then write the program according to that?
    I did that, and the answer is different from yours. Why is that?
    x = [1,5,5,1]
    constraint2(x) returns -12, which is what your program does,
    but with mine, it returns 12.

    • @apm
      @apm  5 years ago

      If it is an equality constraint then it doesn't matter which side you put it on. If your constraint returns 12 then it appears that you are using the solver initial guess and not the optimized solution. For the optimized solution, it should return 0. Please see apmonitor.com/che263/index.php/Main/PythonOptimization for additional source code and examples.

  • @SatishAnnigeri
    @SatishAnnigeri 3 years ago +1

    Am I getting this right? The objective function was 16 with the initial values (1,5,5,1), and after the final solution it was 17.0141. Am I interpreting something incorrectly?

    • @apm
      @apm  3 years ago +1

      The constraints aren't satisfied with the initial guess so it isn't a feasible solution.

    • @SatishAnnigeri
      @SatishAnnigeri 3 years ago +1

      @@apm Oh yes, that is right. I was only looking at the bounds and not at the equality constraint. Sorry, my mistake. Thanks for the tutorial and the reply.

  • @viniciusbotelho9574
    @viniciusbotelho9574 5 years ago

    What are the Lagrange multipliers at the solution?

    • @apm
      @apm  5 years ago

      I don't see those reported, at least in these examples: docs.scipy.org/doc/scipy/reference/tutorial/optimize.html They are reported in APMonitor or GEKKO in the file apm_lam.txt when the option m.options.DIAGLEVEL >= 2.

  • @alfajarnugraha5741
    @alfajarnugraha5741 3 years ago +1

    Sir, I have a lot of questions about optimization with Python, and I have some models that I should finish. So can I get your contact, sir?

    • @apm
      @apm  3 years ago

      Unfortunately I can't help with the many specific questions each week and the personal requests for support. Here are some resources: apmonitor.com/me575

  • @marcosmetalmind
    @marcosmetalmind 4 years ago +1

    APMonitor.com How did you choose your initial guess for x0?

    • @apm
      @apm  4 years ago +2

      It is just an initial guess. For convex problems, the solution will always be the same if it converges. For non-convex problems, the initial guess can lead to a different solution.

    • @marcosmetalmind
      @marcosmetalmind 4 years ago

      @@apm thanks !

  • @user-ou6nw4wm4h
    @user-ou6nw4wm4h 1 year ago +1

    Why is the optimized value 17.014 even larger than the initial guess, 16?

    • @apm
      @apm  1 year ago

      The constraints are not satisfied with the initial guess. The initial guess is infeasible.

  • @lizizhu1843
    @lizizhu1843 4 years ago +1

    Could you point out how to possibly write a set of thousands of constraints using matrices and get them ready for scipy.optimize.minimize? In the real world, the number of variables can easily be in the thousands. I'm dealing with a portfolio construction problem, and I'm faced with thousands of variables. Thank you!

    • @apm
      @apm  4 years ago

      Scipy.optimize.minimize is efficient for problems up to a hundred variables. If you have more then you probably want to try optimization software that is built for large-scale problems. Here are arrays in Python Gekko: stackoverflow.com/questions/52944970/how-to-use-arrays-in-gekko-optimizer-for-python The gradient based optimizers can solve problems with 1,000,000+ variables.

    • @lizizhu1843
      @lizizhu1843 4 years ago +1

      @@apm Thank you so much! I just found out that your website actually has a whole section/department devoted to Gekko Python, which can deal with large scale non-linear optimization. I'll look into it and come back with some feedback.

  • @akshaysunil2852
    @akshaysunil2852 4 years ago +1

    Sir, I'm getting a few errors. How can I contact you?

    • @apm
      @apm  4 years ago

      You can put your questions here in the comments. Unfortunately, I can't help with individual, private requests for support.

  • @10a3asd
    @10a3asd 3 years ago +5

    I got lost around 2:00 and now I'm not sure where or even who I am anymore.

    • @apm
      @apm  3 years ago

      Yup, I can see how that is confusing. I hope you can kick your amnesia.

  • @tigrayrimey6418
    @tigrayrimey6418 3 years ago +1

    Sir, I wrote some comments here in response to the recent materials you recommended I read, but they seem to have been omitted? Am I right?

    • @apm
      @apm  3 years ago +1

      I think you repeated your same question 3 times in the comments. CZcams may have deleted them if it thought the repeated comments were someone spamming the discussion thread.

    • @tigrayrimey6418
      @tigrayrimey6418 3 years ago

      @@apm okay, my case is different. I will repost it. I only mention the term that Prof. and maybe private issues if not possible @YT, I don't know. But, it is not an uncommon practice to express an appreciation for an individual.

  • @abcde1261
    @abcde1261 6 years ago +1

    Thanks! But what about maximum? Can you explain?

    • @apm
      @apm  6 years ago +3

      Any maximization problem can be transformed into a minimization problem by multiplying the objective function by negative one. Most solvers require a minimized objective function.

    • @abcde1261
      @abcde1261 6 years ago

      APMonitor.com thanks for the help, I understand it now :)

  • @hassane_azzi
    @hassane_azzi 4 years ago +1

    Good !!

  • @KULDEEPSINGH-li6gv
    @KULDEEPSINGH-li6gv 4 years ago +1

    File "main.py", line 22
    sol=minimize(objective,x0,method='SLSQP',\ bounds=bnds,constraints=cons)
    ^
    SyntaxError: unexpected character after line continuation character , now am getting this in last line, help me to get out of this

    • @apm
      @apm  4 years ago

      Just get rid of the continuation character "\" if you don't put the remaining part on the next line.

  • @learn5371
    @learn5371 2 years ago +1

    Does my constraint have to be of the >= 0 type?

    • @apm
      @apm  2 years ago

      It can be any type of inequality, but you can convert to g(x)>0 by subtracting the right-hand side or multiplying by -1 to flip the sign. Python Gekko can accept any type of inequality. See methods 2 and 3: apmonitor.com/che263/index.php/Main/PythonOptimization

  • @houdalmayahi3538
    @houdalmayahi3538 4 years ago +1

    Hi, does anyone know how SLSQP works? What is the math behind it? How does it pick one solution out of an infinite number of solutions? I'd really appreciate it. I couldn't find an explanation of the method on Google. Thanks

    • @apm
      @apm  4 years ago +1

      Here is help on quasi-Newton methods: apmonitor.com/me575/index.php/Main/QuasiNewton This doesn't cover the SLSQP (Sequential Least SQuares Programming) method but it should give you an idea of the types of how local approximations and search directions are calculated.

    • @houdalmayahi3538
      @houdalmayahi3538 4 years ago +1

      @@apm Thank you, professor! Do you think that the SLSQP has the same approach as in the Quasi-Newton methods? In other words, would the SLSQP and the Quasi-Newton methods give similar solutions?

    • @apm
      @apm  4 years ago +1

      @@houdalmayahi3538 yes, they should all give the same solution if they satisfy the Karush-Kuhn-Tucker conditions (converge to a local solution): apmonitor.com/me575/index.php/Main/KuhnTucker

    • @houdalmayahi3538
      @houdalmayahi3538 4 years ago

      ​@@apm Thank you so much!

  • @jingyiwang4931
    @jingyiwang4931 5 years ago

    If x[i] is a large array, x[100] for example, do we need to write down all the numbers in the array for the initial guess x0?

    • @jingyiwang4931
      @jingyiwang4931 5 years ago

      In the video it is x0=[1,5,5,1]; if the array is very large, what do we do with the initial guess?

    • @apm
      @apm  5 years ago +1

      You could just set the initial guess to zero:
      x0 = np.zeros(4)
      or to any other number, such as 2: x0 = np.ones(4) * 2.0
      I recommend keeping the initial guess inside the constraint region. You can initialize to any value, especially if you don't have a good idea of the initial guess.

    • @jingyiwang4931
      @jingyiwang4931 5 years ago

      Thank you, professor. But does the initial value have an influence on the optimal result we search for? If not, what's the point of setting an initial value?

    • @apm
      @apm  5 years ago

      For convex optimization there is no problem setting any initial guess because a local optimizer should find the solution. However, some problems have multiple local minima, and then the initial guess is very important. Also, for some problems that are convex, the nonlinearity is a problem for the solver and it may not find a solution without a good guess.

    • @jingyiwang4931
      @jingyiwang4931 5 years ago

      APMonitor.com Thank you very much!!! It really helps a lot!

  • @pythonscienceanddatascienc4351

    Hello!
    I studied your program a lot in Python and a question arose. If, by chance, sum_sq = 400.0 and not 40, constraint2 is not satisfied, because constraint2 ~ 300.024 and it should be 0.
    Why does the program continue to provide a solution for x1 up to x4 if constraint2 is not met?
    Please, I would like your help to interpret this.
    Thanks, Luciana from Brazil.

    • @apm
      @apm  3 years ago +1

      You need to check the solution status; the solver may have reported that it couldn't find a solution. You can see this by printing the result.success Boolean flag:
      res = minimize( ... )
      print(res.success)
      The optimization result is represented as an OptimizeResult object. Important attributes are: 'x', the solution array; 'success', a Boolean flag indicating if the optimizer exited successfully; and 'message', which describes the cause of the termination.

    • @pythonscienceanddatascienc4351
      @pythonscienceanddatascienc4351 3 years ago +1

      @@apm First, thanks for your fast answer.
      I followed your directions and used the function you sent me. However, even placing the value sum_sq = 400.0 in constraint2, res.success = True.
      I didn't understand why the answer is True if it should be False.
      I wrote the following code:
      res = minimize(objective, x0, method='SLSQP', tol=1e-6, constraints=cons)
      print(res.success)
      I'm sorry to have to question you again, but I really need your help.
      Greetings from Brazil.
      Luciana

    • @apm
      @apm  3 years ago +1

      @@pythonscienceanddatascienc4351 Try this (it gives False):

      import numpy as np
      from scipy.optimize import minimize

      def objective(x):
          return x[0]*x[3]*(x[0]+x[1]+x[2])+x[2]

      def constraint1(x):
          return x[0]*x[1]*x[2]*x[3]-25.0

      def constraint2(x):
          sum_eq = 400.0
          for i in range(4):
              sum_eq = sum_eq - x[i]**2
          return sum_eq

      # initial guesses
      n = 4
      x0 = np.zeros(n)
      x0[0] = 1.0
      x0[1] = 5.0
      x0[2] = 5.0
      x0[3] = 1.0

      # show initial objective
      print('Initial SSE Objective: ' + str(objective(x0)))

      # optimize
      b = (1.0, 5.0)
      bnds = (b, b, b, b)
      con1 = {'type': 'ineq', 'fun': constraint1}
      con2 = {'type': 'eq', 'fun': constraint2}
      cons = ([con1, con2])
      solution = minimize(objective, x0, method='SLSQP',
                          bounds=bnds, constraints=cons)
      x = solution.x
      print(solution.success)

      # show final objective
      print('Final SSE Objective: ' + str(objective(x)))

      # print solution
      print('Solution')
      print('x1 = ' + str(x[0]))
      print('x2 = ' + str(x[1]))
      print('x3 = ' + str(x[2]))
      print('x4 = ' + str(x[3]))

    • @pythonscienceanddatascienc4351
      @pythonscienceanddatascienc4351 3 years ago +1

      @@apm
      I tested it now and it worked. Thank you so much!!!
      However, I realized that I did not enter the boundary conditions in the minimize command. I forgot.
      Just a question of interpretation: why is it True if I put the boundary conditions and False if I don't (with only the constraints, would it already give False in both cases)?
      Thanks again.

    • @apm
      @apm  3 years ago +1

      @@pythonscienceanddatascienc4351 your solution may be unbounded (variables go to infinity) if you don't include the boundary conditions.

  • @tracywang1
    @tracywang1 5 months ago

    Question:
    x1*x2*x3*x4 >= 25: why do we take 25 to the left and not all the x to the right?
    x1^2+x2^2+x3^2+x4^2 = 40: why do we take all the x to the right and not 40 to the left?

    • @rrc
      @rrc 5 months ago +1

      It just needs to be in the form f(x)>0 for inequality constraints and f(x)=0 for equality constraints. You could move terms to either side for the equality constraint. For the inequality constraint, you could also move terms to either side, but then multiply by -1 to reverse the sign and get f(x)>0.