
Module: algorithms.optimize

nipy.algorithms.optimize.fmin_steepest(f, x0, fprime=None, xtol=0.0001, ftol=0.0001, maxiter=None, epsilon=1.4901161193847656e-08, callback=None, disp=True)

Minimize a function using a steepest gradient descent algorithm. This complements the collection of minimization routines provided in scipy.optimize. Each steepest descent iteration is cheaper than a conjugate gradient or Newton iteration, so convergence may sometimes turn out faster overall, although more iterations are typically needed (see the sketch below).
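For intuition, the following is a minimal sketch of a steepest descent iteration. It is not nipy's implementation: the function name is hypothetical, and the crude backtracking step stands in for the line search that fmin_steepest actually performs.

>>> import numpy as np
>>>
>>> def steepest_descent_sketch(f, fprime, x0, ftol=1e-4, maxiter=100):
...     """Illustrative steepest descent: step along the negative gradient."""
...     x = np.asarray(x0, dtype=float)
...     fval = f(x)
...     for _ in range(maxiter):
...         direction = -fprime(x)  # steepest descent direction
...         alpha = 1.0
...         # Backtracking: halve the step until f decreases
...         # (fmin_steepest uses a proper line search instead).
...         while f(x + alpha * direction) >= fval and alpha > 1e-12:
...             alpha *= 0.5
...         x_new = x + alpha * direction
...         fval_new = f(x_new)
...         # Stop when the relative function variation falls below ftol.
...         if abs(fval - fval_new) <= ftol * max(abs(fval), 1e-12):
...             return x_new
...         x, fval = x_new, fval_new
...     return x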

Parameters:
f : callable
    Function to be minimized
x0 : array
    Starting point
fprime : callable
    Function that computes the gradient of f
xtol : float
    Relative tolerance on step sizes in line searches
ftol : float
    Relative tolerance on function variations
maxiter : int
    Maximum number of iterations
epsilon : float or ndarray
    If fprime is approximated, use this value for the step size (can be scalar or vector)
callback : callable
    Optional function called after each iteration is complete
disp : bool
    Print convergence message if True

Returns:
x : array
    Fixed point of the gradient descent iterations, a local minimizer of f
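
A usage example, assuming the signature documented above; the quadratic and its gradient are made up for illustration (minimizer at [1, -2]):

>>> import numpy as np
>>> from nipy.algorithms.optimize import fmin_steepest
>>>
>>> def f(x):
...     # Convex quadratic, purely illustrative.
...     return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2
>>>
>>> def fprime(x):
...     # Analytical gradient of f.
...     return np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)])
>>>
>>> xmin = fmin_steepest(f, np.zeros(2), fprime=fprime,
...                      xtol=1e-6, ftol=1e-6, disp=False)
>>> xmin  # should be close to [1, -2]

If fprime is omitted, the gradient is approximated by finite differences using epsilon as the step size, which costs one extra function evaluation per coordinate per iteration.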