use gradient descent to solve non-linear system of equations in matlab

To use gradient descent to solve a non-linear system of equations in MATLAB, we first need to define the objective function that we want to minimize. Here that objective is the sum of squares of the residuals of the system: driving this sum to zero drives every residual to zero, so a minimizer with objective value zero is a solution of the system.

main.m
function [f, grad] = obj(x)
    % Residual vector of the non-linear system (zero at a solution)
    f = [x(1)^2 + x(2)^2 - 1;
         x(1)^2 - x(2)^2];
    % Jacobian of the residuals: grad(i,j) = d f(i) / d x(j)
    grad = [2*x(1),  2*x(2);
            2*x(1), -2*x(2)];
end

Here, x is the vector of unknowns that we want to solve for. In this example, we are solving for x1 and x2 in the system of equations:

x1^2 + x2^2 = 1
x1^2 - x2^2 = 0
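For reference, this system can also be solved by hand: the second equation gives x1^2 = x2^2, and substituting into the first gives 2*x1^2 = 1, so the four solutions are (±1/√2, ±1/√2). A quick residual check at one of them, useful for verifying convergence later:

```matlab
xstar = [1/sqrt(2); 1/sqrt(2)];        % one of the four analytic solutions
r = [xstar(1)^2 + xstar(2)^2 - 1;      % residuals of the system at xstar
     xstar(1)^2 - xstar(2)^2];
disp(norm(r))                          % essentially zero, up to rounding
```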

The f output of the obj function is the vector of residuals of the system of equations, and the grad output is the Jacobian matrix of partial derivatives of f with respect to x.
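Gradient descent itself minimizes a scalar, so the residuals and Jacobian must be combined into the sum-of-squares objective F(x) = 0.5*f'*f, whose gradient by the chain rule is grad'*f. As a minimal sketch:

```matlab
x = [1; 1];                 % any point at which to evaluate the objective
[f, J] = obj(x);            % residuals and Jacobian from the function above
F     = 0.5 * (f' * f);     % scalar sum-of-squares objective being minimized
gradF = J' * f;             % its gradient: dF/dx = J' * f
```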

MATLAB does not ship a gradientDescent routine, so we supply one ourselves and use it to find the values of x that minimize our objective function.

main.m
x0 = [1;1]; % initial guess
lr = 0.1; % learning rate
maxIter = 1000; % maximum number of iterations
tol = 1e-6; % tolerance for convergence

[xopt, fopt, niter] = gradientDescent(@obj, x0, lr, maxIter, tol);

Here, x0 is our initial guess for the values of x, lr is the learning rate for the gradient descent algorithm, maxIter is the maximum number of iterations for the algorithm, and tol is the tolerance for convergence.

The gradientDescent function returns the optimal values of x in xopt, the optimal value of the objective function in fopt, and the number of iterations it took to converge in niter.
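Since gradientDescent is user-supplied rather than built in, its body is an assumption; a minimal sketch matching the signature used above, which forms the sum-of-squares objective from the residuals and Jacobian returned by obj:

```matlab
function [xopt, fopt, niter] = gradientDescent(objFun, x0, lr, maxIter, tol)
    % Minimize F(x) = 0.5*||f(x)||^2, where objFun returns the residual
    % vector f and its Jacobian J (as the obj function above does).
    x = x0;
    for niter = 1:maxIter
        [f, J] = objFun(x);
        g = J' * f;              % gradient of the sum-of-squares objective
        if norm(g) < tol         % converged: gradient is (near) zero
            break;
        end
        x = x - lr * g;          % gradient descent step
    end
    xopt = x;
    f = objFun(x);
    fopt = 0.5 * (f' * f);       % final objective value
end
```

With the settings above (x0 = [1;1], lr = 0.1), the iterates converge toward the analytic solution (1/√2, 1/√2); note that plain gradient descent is sensitive to the learning rate, and methods such as fsolve or lsqnonlin are usually preferable in practice.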
