code for the gradient descent algorithm in python

Below is sample code implementing the gradient descent algorithm in Python:

main.py
import numpy as np

def gradient_descent(X, y, theta, alpha, iterations):
    m = len(y)  # number of training examples
    for _ in range(iterations):
        predictions = np.dot(X, theta)            # model predictions, shape (m, 1)
        errors = predictions - y                  # prediction errors
        gradient = (1/m) * np.dot(X.T, errors)    # gradient of the mean squared error cost
        theta = theta - alpha * gradient          # update step
    return theta

Where:

  • X is the feature matrix
  • y is the target vector
  • theta is the vector of coefficients
  • alpha is the learning rate
  • iterations is the number of iterations

You can call this function by passing in the relevant arguments:

main.py
theta = gradient_descent(X, y, theta, alpha, iterations)

Here X has shape (m, n), with m training examples and n features, y has shape (m, 1), and theta has shape (n, 1).
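As a concrete (hypothetical) example, the function can be used to fit a simple linear model; the data and hyperparameter values below are illustrative:

```python
import numpy as np

# gradient_descent as defined above
def gradient_descent(X, y, theta, alpha, iterations):
    m = len(y)
    for _ in range(iterations):
        predictions = np.dot(X, theta)
        errors = predictions - y
        gradient = (1/m) * np.dot(X.T, errors)
        theta = theta - alpha * gradient
    return theta

# Synthetic data from the line y = 2 + 3x (noise-free, for illustration)
m = 100
x = np.linspace(0, 1, m).reshape(m, 1)
X = np.hstack([np.ones((m, 1)), x])  # prepend a bias column of ones
y = 2 + 3 * x                        # target vector, shape (m, 1)

theta = np.zeros((2, 1))             # initial coefficients
theta = gradient_descent(X, y, theta, alpha=0.5, iterations=2000)
print(theta.ravel())                 # approaches [2, 3]
```

Note the bias column of ones in X, which lets theta[0] act as the intercept.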

Note: This is a basic batch implementation of the gradient descent algorithm. There are many variations and extensions, such as stochastic and mini-batch gradient descent or adaptive methods like momentum and Adam, that may be more suitable for different use cases.
