SciPy Beginner's Guide for Optimization
- added 25 Jul 2024
- Scipy.Optimize.Minimize is demonstrated for solving a nonlinear objective function subject to general inequality and equality constraints. Source code is available at apmonitor.com/che263/index.php...
Correction: The "product" at 0:30 should be "summation". The code is correct. - Science & Technology
I've been watching a lot of your videos and they have helped me a lot, thank you very much.
This is soooooo awesome. Thank you! I have been learning python for months....this is the first time that I have solved an optimization problem
I'm glad it helped. Here are additional tutorials and code: apmonitor.com/che263/index.php/Main/PythonOptimization You may also be interested in this online course: apmonitor.github.io/data_science/
You just saved me like a century trying to figure out how to unroll these summed values. Thanks!
Great to hear!
Thanks for the short and clear explanation.
Thanks for clearing the fear I have of equations and coding.
Thanks for being so kind to reply to every query in the comments section.
Thanks for designing the course and uploading the videos
Thanks for helping many students like me.
You increased my confidence and changed my perception that I can do more. I hope you get what you deserve.
You will be remembered :)
Thank you, Sir!!!!
Wow! Best intro ever. Looking forward to more stuff like this
Thank you for your help. We have an assignment next week and the last question is exactly the same ! :)
I'm glad that it helped.
Many thanks sir for such a great and clear instruction on solving LP in python.
This video saved me a big headache, thank you!
I have no words to thank you, love from India
This gave me hope for my thesis. Thank you sir :D
I wish I can have some of that for mine lol
@@anjalabotejue1730 everything worked out in the end 😊
Wish you much success
Last time I visited his channel for control theory lectures in my undergraduate, now I finished my MSc.
@@TopAhmed1 Im so close haha 😁
quick and to the point. Thank you!
Thank you very much for this. Awesome one
I wish I had found you an hour earlier. Thank you! So clear and helpful. Ahhhh
Very helpful and educational sir. Thank you so much for your videos
It’s very clear, thanks!
absolutely amazing tutorial!
Thank you for clear and simple explanation
Thanks so much. So much clearer now!
Awesome Video! Very clear and helpful!!! Thanks!
Thank you! This helped a lot.
Short and concise, thanks
At 0:39 I think you should write a "summation" of terms squared, sum x_i^2 = 40, not a product.
+Francisco Angel, you are correct. At 0:39 it is the summation of the squares that equals 40, not the product. The constraint above it, the product x1*x2*x3*x4, is the one that is >=25.
APMonitor.com
Yeah, but you wrote that the product of the squared variables equals 40, while the constraint is that the sum of the squared variables is 40; check time 0:39 in the video. You used big pi notation (product) instead of big sigma (summation).
He already admitted his mistake bruh
@@franciscoangel5975 But, he did it correctly in the implementation.
Thank you, very helpful!
Great tutorial!
It's great 👍
Thanks for simple learning.
Thank you for the explanation. There is a slip of the tongue at 0:30 where you said "product". It's the summation of x_i squared. Thanks
Thanks for catching that!
Awesome! thank you.
Thank you sir for this demo. Could you please let me know if there is any implementation for finding Pareto optimal points for multi-objective optimization problems (e.g. two variables) using Scipy.
I don't have source code for this but there is a book chapter here: apmonitor.com/me575/index.php/Main/BookChapters (see Section 6.5)
Great video. Very helpful for the implementation of the code. I had a quick question though. How would one extract the design point history from each iteration? I ask because I have to do a tracking of the history as it converges to the minimizer. Thanks a ton!
You can set the maximum iterations to 1, 2, 3, 4... and then record the x values as it proceeds to the optimum. If you code the optimization algorithm yourself then you can track the progress as well: apmonitor.com/me575/index.php/Main/QuasiNewton
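A third option (not shown in the video) is the callback argument of scipy.optimize.minimize, which the solver calls once per iteration with the current point; a minimal sketch with an illustrative objective:

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return x[0]**2 + x[1]**2

history = []  # design point at each iteration

def record(xk):
    # the solver calls this once per iteration with the current x
    history.append(np.copy(xk))

sol = minimize(objective, [3.0, 4.0], method='SLSQP', callback=record)
# history now holds one array per iteration of the solve
```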
Thanks a lot.
How can we compute the errors associated to the fitted parameters?
Nice job!
I'm new to the optimization field and found the video very useful. Is the example mentioned in the video QCQP (quadratically constraint quadratic programming?)
The objective function isn't quadratic and neither is the product constraint so it would just be classified as a general nonlinear programming problem.
Great video..
Hi, thanks for the video. For a problem like this, which algorithm (genetic, simulated annealing, particle swarm) can we apply, and how?
This one is solved with a gradient descent method. Other methods are discussed at apmonitor.com/me575
Is there any way to keep some of the variables constant? Like those variables are in the objective function, but we don't want to solve for those values? Or is the only way to keep them with the same values for lb and ub in bounds?
If they are constants then you can define them as floating point numbers that are not part of the solution vector. Otherwise, you can set the lb equal to the ub to force it not to change.
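A short sketch of the first option, passing fixed constants through the args parameter so they stay out of the solution vector (the names a and b are just illustrative):

```python
from scipy.optimize import minimize

def objective(x, a, b):
    # a and b are fixed constants; only x is optimized
    return a * (x[0] - b)**2

# minimum is at x = b = 3 regardless of the constant a
sol = minimize(objective, [0.0], args=(2.0, 3.0), method='SLSQP')
```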
Note for anyone who has a less-than-or-equal inequality constraint. From the docs: an equality constraint means that the constraint function result must be zero, whereas an inequality means that it must be non-negative. You need to multiply the constraint by -1 to reverse the sign.
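For example, a constraint x1 + x2 <= 10 is rewritten as 10 - (x1 + x2) >= 0 before handing it to the solver (a small illustrative sketch, not from the video):

```python
from scipy.optimize import minimize

# x1 + x2 <= 10 rewritten in scipy's non-negative form
con = {'type': 'ineq', 'fun': lambda x: 10.0 - (x[0] + x[1])}

# maximize x1 + x2 under the cap by minimizing the negative
sol = minimize(lambda x: -(x[0] + x[1]), [0.0, 0.0],
               method='SLSQP', constraints=[con])
# the solution lands on the constraint boundary: x1 + x2 = 10
```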
That is a very good explanation!
I was wondering the same. Thanks for the supplementary information.
Thanks a lot for this tutorial! There is something puzzling me about the result though. The solution x you're getting is not a minimizer, since the objective function evaluated at this point is 17, which is greater than the initial guess's objective of 16. Why is that?
The initial guess wasn't a feasible solution because it violated a constraint. The final solution is the best objective that also satisfies the constraints. Great question!
i had the same question, thanks for asking
can you get solution of "maximum ellipse on certain polygon?" using optimize library?
for example A(x-x0)^2 + B(y-y0)^2 = 1 st. set of linear constraints which describe polygon on X-Y space.
And I need to get best x0,y0 which minimize A*B
Here is a discussion thread in the APMonitor support group that is similar to your question: groups.google.com/d/msg/apmonitor/MET1rx3Cr4s/iQN9kiPlBgAJ I also recommend this link as a starting point: apmonitor.com/me575/index.php/Main/CircleChallenge
thank you sir, god bless you
Awesome, thanks
Great! Thanks.
Hello APMonitor, thank you for making such wonderful videos. I have a little problem: I tried adapting the optimization with a nested list, but scipy doesn't seem to understand the structure, so I changed my model to a NumPy array, which it understands, but it gives me an "indexing a scalar" error. If you can show me how to implement the optimization using arrays, that would be really helpful.
Check out the Python GEKKO package. You can use list comprehensions to implement arrays. apmonitor.com/wiki/index.php/Main/GekkoPythonOptimization
What happens if there are multiple solutions? I mean there is more than one set of x1, x2, x3, x4 as a solution. Are they all stacked into the final array (which would become multi-dimensional)? Thanks
The solver reports only the local minimum. Try different initial guesses to get other local minima.
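That multistart idea can be sketched as a loop over random guesses, keeping each distinct local minimum found (the objective here is just an illustrative function with more than one local minimum):

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # cosine term creates multiple local minima; quadratic keeps it bounded
    return np.cos(x[0]) + 0.1 * x[0]**2

rng = np.random.default_rng(0)
minima = []
for _ in range(10):
    x0 = rng.uniform(-10.0, 10.0, size=1)
    sol = minimize(objective, x0, method='SLSQP')
    # record the solution if it is a new local minimum
    if sol.success and not any(abs(sol.x[0] - m) < 1e-3 for m in minima):
        minima.append(sol.x[0])
```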
Hello John. I am wondering if you have any material related to Python and OR. I am teaching myself OR on one side and Python on the other. I am wondering if there is any book in which I can combine them. Do you have any material or course that I can access? Looking forward to hearing from you. Paul.
Sorry, no Operations Research specific material but I do have a couple courses that may be of interest to you including engineering optimization and dynamic optimization. apm.byu.edu
APMonitor.com Thank you so much. I will check it. Regards Paúl
Fantastic video. What is the best way to add bounds for multiple variables, say more than 24 (optimizing across time, 12 months)? I definitely don't want to list them all... perhaps loop through the variable matrix... but hopefully there is a more efficient way? Thanks!
Scipy.optimize.minimize is going to struggle with a problem that size. I'd recommend Python gekko for your problem, especially if you have differential and algebraic equations.
apmonitor.com/wiki/index.php/Main/GekkoPythonOptimization
@@apm - yeah I've been looking at that! I'll dig in further....I hope there's an easier way to specify bounds for all variables...vs. literally listing them as we see in most of the examples. I'll def be exploring Gekko Thanks for the very prompt response!
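One way to avoid listing bounds by hand is to repeat the same (lower, upper) pair with list multiplication; a minimal sketch with an illustrative objective:

```python
import numpy as np
from scipy.optimize import minimize

n = 24                      # e.g. two years of monthly decisions
bnds = [(0.0, 5.0)] * n     # same bounds repeated for every variable
x0 = np.ones(n)             # simple initial guess

# toy objective: pull every variable toward 2 within the bounds
sol = minimize(lambda x: np.sum((x - 2.0)**2), x0,
               method='SLSQP', bounds=bnds)
```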
hi, why is the min objective with the initial guess numbers lower than the min objective at the end? shouldn't the solver return the lowest value of the function?
The initial guess doesn't satisfy the constraints so it isn't a potential solution. The final objective is higher because the solver finds the best combination of decision variables that are the minimum objective and satisfy the constraints.
Thanks for this very approachable intro to using Python to do optimization. What book(s) can you recommend specifically for Python and optimization?
Here is a free online book on optimization: apmonitor.com/me575/index.php/Main/BookChapters with an accompanying online course.
Thanks for the video! Does it matter if the constraint is < or > for the method? Should it be in the standard form like for LP?
I think it is just the way I showed in the video. You can always reverse the sign of the inequality with a negative sign or there may be an option in the solver to switch it.
Great video; it helped me a lot.
Just wondering though, if you wanted your bounds to be, say, STRICTLY greater than 1 and STRICTLY less than 5 (i.e. 1 < x < 5 rather than 1 <= x <= 5), is that possible?
Saeed Baig, the numbers 4.9999999 and 5.0000000 are treated almost equivalently when we use continuous values versus integer or discrete variables. That means that a strict inequality is effectively the same as an inclusive bound for continuous variables; if you truly need a gap, tighten the bound by a small tolerance.
O ok. Thanks for the quick reply!
In the end fun value turned out to be greater than the initial guess? Is it optimized? What am I missing?
The initial guess is infeasible. It doesn't satisfy the constraints.
Thanks for the video, but what do I do if my objective is maximization?
You can multiply your objective by -1 to switch from minimizing to maximizing.
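A minimal sketch of that sign flip (the quadratic here is just an illustrative function with a known maximum of 5 at x = 3):

```python
from scipy.optimize import minimize

def neg_objective(x):
    # negate f(x) = -(x-3)^2 + 5 so the minimizer finds its maximum
    return -(-(x[0] - 3.0)**2 + 5.0)

sol = minimize(neg_objective, [0.0], method='SLSQP')
max_value = -sol.fun  # flip the sign back to report the maximum
```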
Is the minimize function based on gradient descent?
The optimizer will work better if you have a continuous objective function and gradients. The Scipy.optimize.minimize package obtains gradients through finite difference. It calculates the gradients to determine the next trial point.
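If derivatives are available, you can skip the finite differencing by passing an exact gradient through the jac argument; a small sketch:

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return x[0]**2 + 10.0 * x[1]**2

def gradient(x):
    # exact partial derivatives of the objective above
    return np.array([2.0 * x[0], 20.0 * x[1]])

sol = minimize(objective, [3.0, -2.0], method='SLSQP', jac=gradient)
```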
Hi, what I noticed: if you evaluate at x0 you'll get 16, and if you use the `minimize` function, it gives an array with a result around 17. But doesn't that mean our goal with the `minimize` function hasn't been fulfilled?
The initial guess violates the constraints so it isn't a feasible solution. The optimal solution is the minimum of all the feasible solutions.
Why do you use x1, x2, x3, x4 in returning the objective function but then switch to using x[0], x[1], x[2], x[3] in the constraint functions?
I'm not trying to be pedantic, but it would help me if it was consistent either one way or the other, unless there's a reason that they're like that?
e.g. could the constraint1(x) function be written as return x1 * x2 * x3 * x4 - 25 instead? (purely to be consistent)
and/or could you write constraint1 in the same way you did for constraint2: using a for loop?
e.g.
def constraint1(x):
    product = 1
    for i in range(4):
        product = product * x[i]
    return product - 25
or instead write constraint2(x) as you did with the objective function return (and as I did with constraint1(x) above): using x1, x2, x3, x4 instead of x[i]. This way it is consistent and may be easier for people who aren't good coders or mathematicians to follow along perhaps..
(I apologise if these are bad questions/suggestions - I am not a programmer, just trying to learn, and in this case understand why you wrote things out in 3 different ways instead of 1..)
Thank you kindly for any help in understanding (and of course for the video itself - cheers)!
Thanks for the helpful suggestions. I used x1, x2, etc because that is how the mathematicians write the optimization problem. Python begins with index 0 so I needed to create an array for solving the problem so x[0] maps to x1. I can see how this is very confusing and I'll try to avoid this difference in future videos.
Thanks
Easy and lucid style
Hi, do you have any idea why i'm getting this:
"C:\Python34\lib\site-packages\scipy\optimize\slsqp.py:341: RuntimeWarning: invalid value encountered in greater bnderr = where(bnds[:, 0] > bnds[:, 1])[0]"? I'm getting the result, but i'm afraid this error is messing up the results.
Edit: it seems it has something to do with the infinite bound. I read somewhere that to specify an infinite bound you had to input None (e.g. b = (0, None)).
+Alexandru David, I had this same problem with infinite bounds and 'None'. I get around this by specifying a large upper or lower bound such as 1e10 or -1e10.
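In more recent SciPy versions, None (or np.inf) on one side of a bound is handled directly; a sketch of both forms:

```python
from scipy.optimize import minimize

# x >= 1 with no upper limit: None marks the open side of the bound
bnds = [(1.0, None)]
sol = minimize(lambda x: (x[0] - 3.0)**2, [5.0],
               method='SLSQP', bounds=bnds)
# if None still causes warnings on an older SciPy, a large finite
# bound such as (1.0, 1e10) behaves the same in practice
```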
Is there a way to use scipy.optimize.minimize with only integer values?
No, it only optimizes with continuous values. Here is a solution with a mixed integer solver (see GEKKO solutions): apmonitor.com/wiki/index.php/Main/IntegerBinaryVariables
What's the difference between the "ineq' and 'eq' designations on the constraints?
They are inequality ('ineq', e.g. x+y >= 0, where the constraint function must be non-negative) and equality ('eq', where the constraint function must equal zero) constraints.
Great video. May I know if the minimize function is basically finding the root of x1x4(x1+x2+x3)+x3=0 equation?
This would help a lot. Sorry for dumb question. Thanks!
I think the lowest objective value that it can find is around 17, so it would be x1x4(x1+x2+x3)+x3=17. The constraints prevent it from going to zero or into a negative region. The lowest value for each variable is 1, so the minimum it could be with no other constraints is 1(3)+1=4.
@@apm Thanks for the reply. Got it. Can we give a bunch of nonlinear equations to the minimize function and get the roots, given the only constraint is that the roots are positive?
Thanks!
I think we should have 2 constraints in this case.
One is to make sure the function doesn't end up being negative.
Other is to make sure the roots are not negative.
@APMonitor.com any thoughts would be helpful. Thanks!
@@sriharshakodati yes that is possible.
Thank you, Prof. Hedengren. Noting that this video was posted five years ago, do you think in 2022 there is still a case for using SciPy for optimization tasks instead of using GEKKO?
In particular, GEKKO’s MINLP does not have an equivalent within SciPy. Is there any optimization task where SciPy would still be required instead of using GEKKO?
Yes, solvers such as scipy.optimize.minimize are very useful when the equations are black-box, meaning that there are function evaluations but there is no access to get the equations for automatic differentiation.
@@apm Thanks for that valuable tip. Currently dealing with in-line multiphase flow measurement (MPFM) calculations, in particular, trying to optimize the system configuration against competing criteria (e.g. minimize measurement uncertainty, minimize acceptable pressure loss (both OPEX concerns), and minimize COGS (CAPEX and Gross Margin concerns)). MPFM systems quickly result in de-facto black-box objective functions.
Hello @APMonitor, thanks for this video. I have one doubt. I am trying to use SciPy.Optimize with Pandas dataframe. I want to minimize my target columns based on other columns. Can you please tell how to do that?
The objective function always needs to have input parameters and return a scalar objective. You can do that with Pandas but you may need to condense your target minimized column with something like np.sum(my_pandas_df['my_column']) or np.sum(my_pandas_df['my_column']**2).
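A minimal sketch of that pattern, with a made-up dataframe (the column names and data are illustrative only): fit y ≈ a*x + b by returning the scalar sum of squared residuals:

```python
import numpy as np
import pandas as pd
from scipy.optimize import minimize

# hypothetical data where y = 2x + 1 exactly
df = pd.DataFrame({'x': [0.0, 1.0, 2.0, 3.0],
                   'y': [1.0, 3.0, 5.0, 7.0]})

def objective(p):
    a, b = p
    resid = df['y'] - (a * df['x'] + b)
    return np.sum(resid**2)  # must return a scalar

sol = minimize(objective, [0.0, 0.0], method='SLSQP')
```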
Thanks. I tried the same thing.
But I am stuck with one problem. There are some constraints in my problem. (i.e. Sum of few variables must be equal to a certain value) but The scipy optimize function is distributing them almost equally.
How would you do this for integers only?
Here is information on solving binary, integer, and special ordered sets in Python: apmonitor.com/wiki/index.php/Main/IntegerBinaryVariables
Hello APMonitor, great video. A question: what is the capacity of Scipy? Does it have a restriction on the number of variables or constraints? To solve, for example, the vehicle routing problem, does it have the capacity?
You can typically do 50-100 variables and constraints with Scipy.optimize.minimize and potentially larger if the problem is mostly linear. If you need something much larger (100000+ variables) then check out Python Gekko (gekko.readthedocs.io/en/latest/).
There is no restriction with Scipy but the problem may take a long time. Here is a different scale-up analysis on ODEINT: czcams.com/video/8kx6vC9gTLo/video.html
This is for just two constraints. What if we have N constraints and, due to the way the problem is posed, we cannot use matrices. How do we use a generic constraint function N times in this case?
You would add them as additional constraints in the tuple (separated with commas) that you give as an input to the optimizer.
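A sketch of building N constraints in a loop; note the i=i default argument, which freezes the loop variable so each lambda keeps its own index:

```python
from scipy.optimize import minimize

n = 5
# require x[i] >= i for each i
cons = tuple({'type': 'ineq', 'fun': lambda x, i=i: x[i] - i}
             for i in range(n))

# minimize sum of squares; solution is x[i] = i (with x[0] = 0)
sol = minimize(lambda x: sum(xi**2 for xi in x), [10.0] * n,
               method='SLSQP', constraints=cons)
```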
I have a question. If we change the guess, for instance to 1 for all values, the script also gives 1 for all the variables in the result, and that's totally wrong. So what could be the problem here?
Scipy.optimize.minimize isn't the best solver in Python. Here are some other options: scicomp.stackexchange.com/questions/83/is-there-a-high-quality-nonlinear-programming-solver-for-python You may also want to try Gekko: apmonitor.com/che263/index.php/Main/PythonOptimization (see Method #3)
@@apm thanks! Very useful
Does the initial set of x minimize the function?
the result of your code shows the f(x)=17.01..., which is larger than 16
why is that?
Ming Yan, good observation! If you check the initial guess, you'll see that the equality constraint (=40) is not satisfied so it isn't a feasible candidate solution. There is a good contour plot visualization of this problem here: apmonitor.com/pdc/index.php/Main/NonlinearProgramming
thank you for your explanation and references. I will be more careful next time.
Ming Yan, I think this was a great question because many others have the same concern and your question will likely help others too.
definitely
I was wondering the same. Thanks for this.
Thank you very much for your tutorial. Does Scipy minimize function require data?
You can use Scipy minimize without data. In this tutorial, there is no data. There is only an objective function and two constraints.
Many thanks!
hello
thank you for your video
I am wondering how it is possible to solve this problem without an initial guess. I tried to remove the initial guess but, it looks like, an initial guess must be provided as an argument to minimize(). Any advice will be highly appreciated.
+Manoochehr Akhlaghinia, an initial starting point for the optimizer is always required. If you don't have a good initial guess then I'd recommend starting with a vector of 1s or 0s.
APMonitor.com thank you so much for your reply. It looks like if the initial guess is not good then the optimizer cannot converge to an acceptable solution. For instance, the example you covered in your video does not give the right answer with [0,0,0,0] or [1,1,1,1].
I have migrated from MATLAB to Python. In MATLAB, the Genetic Algorithm can solve any constrained, bounded optimization problem even with no initial guess. So robust.
I have also noticed there is Differential Evolution in Python, which optimizes without an initial guess, but it can't work with constraints.
I'm wondering if there is no such tool in Python at all (a constrained, bounded optimizer with no initial guess) and I had better stop searching for it and focus on finding a robust algorithm to provide initial guesses for the current Python optimizers. Any advice will be highly appreciated.
+Manoochehr Akhlaghinia, there may be a GA package for Python such as pypi.python.org/pypi/deap but I haven't tried it. I don't know of a package that doesn't require initial guesses. You could also try simulated annealing to generate initial guesses such as apmonitor.com/me575/index.php/Main/SimulatedAnnealing
Thank you. Really appreciate it
can we use Scipy.Optimize.Minimize for integer LP?
No, it doesn't do integer variables. Try Gekko: apmonitor.com/wiki/index.php/Main/IntegerProgramming
I have a question. I attempted to maximize the function but couldn't get it to work right. What do I need to change? Thank you for your help!
This is what I did:
def objective(x, sign=1.0):
    x1 = x[0]
    x2 = x[1]
    x3 = x[2]
    x4 = x[3]
    return sign*(x1*x4*(x1+x2+x3)+x3)
def constraint1(x):
    return x[0]*x[1]*x[2]*x[3]-25.0
def constraint2(x):
    sum_sq = 40
    for i in range(4):
        sum_sq = sum_sq - x[i]**2
    return sum_sq
sol2 = minimize(objective,x0,args=(-1.0,),method='SLSQP',\
                bounds=bnds,constraints=cons)
But it gave me this (it went beyond constraint1)
print(sol2)
fun: -134.733824524366
jac: array([-45.75387001, -16.64193916, -17.64193916, -36.49635124])
message: 'Optimization terminated successfully.'
nfev: 68
nit: 11
njev: 11
status: 0
success: True
x: array([4.56763313, 1.66137361, 1.76120447, 3.64344948])
print(objective(sol2.x,1))
print(constraint1(sol2.x))
print(constraint2(sol2.x))
134.733824524366
23.69462805071327
-1.0361844715589541e-10
You are missing a few lines. You may also want to check out the Gekko solution and syntax: apmonitor.com/che263/index.php/Main/PythonOptimization Here is a solution to that problem with scipy.optimize.minimize:
import numpy as np
from scipy.optimize import minimize
def objective(x):
    return x[0]*x[3]*(x[0]+x[1]+x[2])+x[2]
def constraint1(x):
    return x[0]*x[1]*x[2]*x[3]-25.0
def constraint2(x):
    sum_eq = 40.0
    for i in range(4):
        sum_eq = sum_eq - x[i]**2
    return sum_eq
# initial guesses
n = 4
x0 = np.zeros(n)
x0[0] = 1.0
x0[1] = 5.0
x0[2] = 5.0
x0[3] = 1.0
# show initial objective
print('Initial Objective: ' + str(objective(x0)))
# optimize
b = (1.0,5.0)
bnds = (b, b, b, b)
con1 = {'type': 'ineq', 'fun': constraint1}
con2 = {'type': 'eq', 'fun': constraint2}
cons = ([con1,con2])
solution = minimize(objective,x0,method='SLSQP',\
                    bounds=bnds,constraints=cons)
x = solution.x
# show final objective
print('Final Objective: ' + str(objective(x)))
# print solution
print('Solution')
print('x1 = ' + str(x[0]))
print('x2 = ' + str(x[1]))
print('x3 = ' + str(x[2]))
print('x4 = ' + str(x[3]))
Sir, can you please demonstrate trust region algorithm implementation?
Here is some relevant information on the trust region approach: neos-guide.org/content/trust-region-methods
Sir, is there any advantage or disadvantage in using existing methodologies such as artificial neural networks, fuzzy logic, and genetic algorithms for optimization of a linear objective function, subject to linear equality and linear inequality constraints, over the Scipy method?
I think the Scipy method will provide accurate results in a short time compared to the other methods, since the complexity of a linear problem is low.
Gradient based methods like scipy find a local solution and are typically much faster than the other methods.
@@apm Thankyou sir
How can we implement "strict less than" constraint?
Because you have continuous variables, values that differ only by the solver tolerance (such as 24.9999999 and 25.0000000) are treated as essentially equivalent, so a strict inequality behaves the same as a non-strict one. If you need a true gap, subtract a small tolerance from the bound.
Can anyone help me carry out multi-objective optimisation using Python?
Here is some help: apmonitor.com/do/index.php/Main/MultiObjectiveOptimization There is also more information in the optimization course: apmonitor.com/me575
How does Scipy compare with AMPL, GAMS, AIMMS?
Scipy doesn't provide exact derivatives with automatic differentiation while the others do provide gradients for faster solutions. A modeling language can also help with common modeling constructs. A couple alternatives for Python are pyomo and Gekko. There is a gekko tutorial and comparison at apmonitor.com/che263/index.php/Main/PythonOptimization Additional gekko tutorials are here: apmonitor.com/wiki/index.php/Main/GekkoPythonOptimization pyomo and gekko are both freely available while the others that you listed have a licensing fee. pip install gekko
Nice!!! Any example of Multi-Objective Linear Programming?
Here's some linear programming examples apmonitor.com/me575/index.php/Main/LinearProgramming multiple objectives are typically combined into one objective function. If you can't combine them then you may need to create a Pareto Frontier. More information on this is available at the optimization class website.
That's the least squares method!
Can you explain use of this as well so it will be useful for beginners like me to understand
Sure, there is additional information at apmonitor.com/che263/index.php/Main/PythonOptimization - you may also want to look at Python GEKKO.
But how come the minimization gave an objective function higher than before? You see, initially your guess gave this to be equal to 16. After minimization, it was 17.014... Shouldn't minimization minimize the objective function and return points that give a value even smaller than that of the guess?
The initial guess didn't satisfy the inequality and equality constraints. It had a better objective function but it was infeasible.
Thanks I was wondering the same.
what if it was strictly larger than 25, what would you type then
It is the same if you are dealing with continuous variables. The value 25.00000000 and 25.00000001 are considered to be the same if the solver tolerance is 1e-8.
at 4:15, you said to move everything to the right side for the second constraint, but what if I move everything to the left side, then write the program according to that.
I did that, and the answer is different from yours. Why is that?
x = [1,5,5,1]
constraint2(x) returns -12 that is what your program is doing
but with mine, it returns 12
If it is an equality constraint then it doesn't matter which side you put it on. If your constraint returns 12 then it appears that you are using the solver initial guess and not the optimized solution. For the optimized solution, it should return 0. Please see apmonitor.com/che263/index.php/Main/PythonOptimization for additional source code and examples.
Am I getting this right? The objective function was 16 with the initial values (1,5,5,1), and after the final solution it was 17.0141. Am I interpreting something incorrectly?
The constraints aren't satisfied with the initial guess so it isn't a feasible solution.
@@apm Oh yes, that is right. I was only looking at the bounds and not at the equality constraint. Sorry, my mistake. Thanks for the tutorial and the reply.
which are Lagrange multipliers at the solution?
I don't see those reported, at least in these examples: docs.scipy.org/doc/scipy/reference/tutorial/optimize.html They are reported in APMonitor or GEKKO as the file apm_lam.txt when option m.options.diaglevel >=2.
Sir, I have a lot of questions about optimization with Python, and I have a model that I need to finish. Can I get your contact, sir?
Unfortunately I can't help with the many specific questions each week and the personal requests for support. Here are some resources: apmonitor.com/me575
APMonitor.com How did you choose your starting point x0?
It is just an initial guess. For convex problems, the solution will always be the same if it converges. For non-convex problems, the initial guess can lead to a different solution.
@@apm thanks !
Why the optimized value 17.014 is even larger than the initial guess, 16?
The constraints are not satisfied with the initial guess. The initial guess is infeasible.
Could you point out how to possibly write a set of thousands of constraints using matrices and get them ready for scipy.optimize.minimize? In real world, the number of variables can be easily thousands. I'm dealing with a portfolio construction problem and I'm faced with thousands of variables. Thank you!
Scipy.optimize.minimize is efficient for problems up to a hundred variables. If you have more then you probably want to try optimization software that is built for large-scale problems. Here are arrays in Python Gekko: stackoverflow.com/questions/52944970/how-to-use-arrays-in-gekko-optimizer-for-python The gradient based optimizers can solve problems with 1,000,000+ variables.
@@apm Thank you so much! I just found out that your website actually has a whole section/department devoted to Gekko Python, which can deal with large scale non-linear optimization. I'll look into it and come back with some feedback.
Sir, I'm getting a few errors; how can I contact you?
You can put your questions here in the comments. Unfortunately, I can't help with individual, private requests for support.
I got lost around 2:00 and now I'm not sure where or even who I am anymore.
Yup, I can see how that is confusing. I hope you can kick your amnesia.
Sir, I wrote some comments here in response to the recent materials you recommended me to read, but they seem to have been omitted? Am I right?
I think you repeated your same question 3 times in the comments. CZcams may have deleted them if it thought the repeated comments were someone spamming the discussion thread.
@@apm okay, my case is different. I will repost it. I only mention the term that Prof. and maybe private issues if not possible @YT, I don't know. But, it is not an uncommon practice to express an appreciation for an individual.
Thanks! But what about maximum? Can you explain?
Any maximization problem can be transformed into a minimization problem by multiplying the objective function by negative one. Most solvers require a minimized objective function.
APMonitor.com thanks for the help, I understand it now :)
Good !!
File "main.py", line 22
sol=minimize(objective,x0,method='SLSQP',\ bounds=bnds,constraints=cons)
^
SyntaxError: unexpected character after line continuation character , now am getting this in last line, help me to get out of this
Just get rid of the continuation character "\" if you don't put the remaining part on the next line.
My constraint has to be >= 0 type?
It can be any type of inequality but you can convert to g(x)>0 by subtracting the right hand side or multiplying by -1 to flip the sign. Python Gekko can accept any type of inequality. See methods 2 and 3: apmonitor.com/che263/index.php/Main/PythonOptimization
Hi, does anyone know how SLSQP works? What is the math behind it? How does it pick one solution out of an infinite number of solutions? I'd really appreciate it. I couldn't find an explanation of the method on Google. Thanks
Here is help on quasi-Newton methods: apmonitor.com/me575/index.php/Main/QuasiNewton This doesn't cover the SLSQP (Sequential Least SQuares Programming) method but it should give you an idea of the types of how local approximations and search directions are calculated.
@@apm Thank you, professor! Do you think that the SLSQP has the same approach as in the Quasi-Newton methods? In other words, would the SLSQP and the Quasi-Newton methods give similar solutions?
@@houdalmayahi3538 yes, they should all give the same solution if they satisfy the Karush-Kuhn-Tucker conditions (converge to a local solution): apmonitor.com/me575/index.php/Main/KuhnTucker
@@apm Thank you so much!
If x is a large array, x[100] for example, do we need to write down all the numbers in the array for the initial guess x0?
In the video x0=[1,5,5,1]; if the array is very large, what do we do for the initial guess?
You could just set the initial guess to zero:
x0 = np.zeros(100)
or any other number, such as 2: x0 = np.ones(100) * 2.0
I recommend choosing an initial guess that lies inside the constraints. You can initialize to any value, especially if you don't have a good idea of the initial guess.
Thank you, professor. But does the value of initial value have an influence on the optimal result we search for? if not what's the point of setting an initial value?
For convex optimization there is no problem setting any initial guess because a local optimizer should find the solution. However, some problems have multiple local minima and then the initial guess is very important. Also, for some problems that are convex, the nonlinearity is still a problem for the solver and it may not find a solution without a good guess.
APMonitor.com Thank you very much!!!Really help a lot!
Hello!
I studied your program a lot in Python and a question arose. If, by chance, sum_sq = 400.0 and not 40, constraint2 is not satisfied, because constraint2 ~ 300.024 and it should be 0.
Why does the program continue to provide a solution for x1 up to x4 if constraint 2 is not met?
Please, I would like your help to interpret this.
Thanks, Luciana from Brazil.
You need to check the solution status. The solver may have reported that it couldn't find a solution. You can see this by printing the result.success Boolean flag.
res = minimize( )
print(res.success)
The optimization result is represented as a OptimizeResult object. Important attributes are: "x" the solution array, "success" a Boolean flag indicating if the optimizer exited successfully and message which describes the cause of the termination.
@@apm First, thanks for your fast answer.
I followed your directions and used the function you sent me. However, even placing the value sum_sq = 400.0 in constraint2, res.success = True.
I didn't understand why the answer is True if it should be False.
I wrote the following code:
res = minimize(objective, x0, method='SLSQP', tol=1e-6,constraints=cons)
print(res.success)
I'm sorry to have to question you again, but I really need your help.
Greetings from Brazil.
Luciana
@@pythonscienceanddatascienc4351 Try this (if gives False):
import numpy as np
from scipy.optimize import minimize
def objective(x):
    return x[0]*x[3]*(x[0]+x[1]+x[2])+x[2]
def constraint1(x):
    return x[0]*x[1]*x[2]*x[3]-25.0
def constraint2(x):
    sum_eq = 400.0
    for i in range(4):
        sum_eq = sum_eq - x[i]**2
    return sum_eq
# initial guesses
n = 4
x0 = np.zeros(n)
x0[0] = 1.0
x0[1] = 5.0
x0[2] = 5.0
x0[3] = 1.0
# show initial objective
print('Initial SSE Objective: ' + str(objective(x0)))
# optimize
b = (1.0,5.0)
bnds = (b, b, b, b)
con1 = {'type': 'ineq', 'fun': constraint1}
con2 = {'type': 'eq', 'fun': constraint2}
cons = ([con1,con2])
solution = minimize(objective,x0,method='SLSQP',\
                    bounds=bnds,constraints=cons)
x = solution.x
print(solution.success)
# show final objective
print('Final SSE Objective: ' + str(objective(x)))
# print solution
print('Solution')
print('x1 = ' + str(x[0]))
print('x2 = ' + str(x[1]))
print('x3 = ' + str(x[2]))
print('x4 = ' + str(x[3]))
@@apm
I tested it now and it worked. Thank you, so much!!!
However, I realized that I did not enter the boundary conditions in the minimize command. I forgot.
Just a question of interpretation: why is it True if I include the boundary conditions and False if I don't (with only the constraints, would it give False in both cases)?
Thanks again.
@@pythonscienceanddatascienc4351 your solution may be unbounded (variables go to infinity) if you don't include the boundary conditions.
Question:
x1*x2*x3*x4 >= 25: why do we move the 25 to the left and not all the x terms to the right?
x1^2+x2^2+x3^2+x4^2 = 40: why do we move all the x terms to the right and not the 40 to the left?
It just needs to be in the form f(x)>0 for inequality constraints and f(x)=0 for equality constraints. You could go to either side for the equality constraint. For the inequality constraint, you could also go to either side but then just multiply by -1 to reverse the sign to get f(x)>0.