To solve a non-linear system of equations with gradient descent in MATLAB, we first need to define the objective function to minimize. In this case, the objective function is the sum of squares of the residuals of the non-linear system of equations.
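The original `main.m` snippet is not preserved here; a sketch of what such an objective function might look like, assuming an illustrative example system `x1^2 + x2^2 - 4 = 0`, `x1 - x2 - 1 = 0`, is:

```matlab
function [f, grad] = obj(x)
% Residuals of the example system (assumed for illustration)
f = [x(1)^2 + x(2)^2 - 4;
     x(1) - x(2) - 1];
J = [2*x(1), 2*x(2);   % Jacobian: partial derivatives of f w.r.t. x
     1,      -1];
grad = 2 * J' * f;     % gradient of the sum-of-squares objective
end
```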
Here, `x` is the vector of unknowns that we want to solve for. In this example, we are solving for `x1` and `x2` in the system of equations:
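The original snippet is not preserved; an illustrative stand-in system of two equations in two unknowns could be:

```matlab
% Example system (assumed for illustration):
% x1^2 + x2^2 - 4 = 0
% x1  - x2  - 1 = 0
```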
The `f` output of the `obj` function is the vector of residuals of the system of equations, and the `grad` output is the gradient of the sum-of-squares objective with respect to `x`, assembled from the partial derivatives (the Jacobian) of `f`.
We can then use a `gradientDescent` function (a user-defined helper; MATLAB has no built-in function by this name) to find the values of `x` that minimize our objective function.
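The original call is not preserved; a reconstruction with illustrative parameter values (the names match the description below, the values are assumptions) might be:

```matlab
x0 = [1; 1];      % initial guess for x (illustrative)
lr = 0.01;        % learning rate (illustrative)
maxIter = 1000;   % maximum number of iterations
tol = 1e-6;       % convergence tolerance

[xopt, fopt, niter] = gradientDescent(@obj, x0, lr, maxIter, tol);
```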
Here, `x0` is our initial guess for the values of `x`, `lr` is the learning rate for the gradient descent algorithm, `maxIter` is the maximum number of iterations for the algorithm, and `tol` is the tolerance for convergence.
The `gradientDescent` function returns the optimal values of `x` in `xopt`, the optimal value of the objective function in `fopt`, and the number of iterations it took to converge in `niter`.
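Since `gradientDescent` is not a built-in MATLAB function, a minimal implementation consistent with the signature described above might look like this (a sketch, not the original code):

```matlab
function [xopt, fopt, niter] = gradientDescent(objFun, x0, lr, maxIter, tol)
% Minimal gradient descent loop (illustrative sketch)
x = x0;
for niter = 1:maxIter
    [~, grad] = objFun(x);
    if norm(grad) < tol    % converged: gradient is nearly zero
        break;
    end
    x = x - lr * grad;     % step in the direction of steepest descent
end
xopt = x;
f = objFun(xopt);
fopt = sum(f.^2);          % final sum-of-squares objective value
end
```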