Optimizers

Gradient descent

class pykitml.GradientDescent(learning_rate, decay_rate=1)

This class implements gradient descent optimization.

__init__(learning_rate, decay_rate=1)
Parameters:
  • learning_rate (float) – Learning rate; the step size used for each parameter update.
  • decay_rate (float) – Decay rate for the learning rate; the default of 1 applies no decay.
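
For orientation, a minimal NumPy sketch of the vanilla gradient descent step that these parameters control (pykitml's internal implementation may differ; params and grads are hypothetical stand-ins for a model's weights and loss gradients):

    import numpy as np

    learning_rate = 0.1
    params = np.array([1.0, -2.0])  # hypothetical model parameters
    grads = np.array([0.5, -0.3])   # hypothetical loss gradients

    # Vanilla gradient descent: step against the gradient.
    params -= learning_rate * grads

Typically the constructed optimizer object, e.g. pykitml.GradientDescent(learning_rate=0.1), is passed to a model's training routine; see the training documentation for the exact keyword.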

Momentum

class pykitml.Momentum(learning_rate, decay_rate=1, beta=0.9)

This class implements momentum optimization.

__init__(learning_rate, decay_rate=1, beta=0.9)
Parameters:
  • learning_rate (float) – Learning rate; the step size used for each parameter update.
  • decay_rate (float) – Decay rate for the learning rate; the default of 1 applies no decay.
  • beta (float) – Momentum coefficient; should be between 0 and 1.
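
A minimal NumPy sketch of one common formulation of the momentum update (whether pykitml scales the gradient term by 1 - beta is an assumption here; its internals may differ):

    import numpy as np

    learning_rate, beta = 0.1, 0.9
    params = np.array([1.0, -2.0])    # hypothetical model parameters
    grads = np.array([0.5, -0.3])     # hypothetical loss gradients
    velocity = np.zeros_like(params)  # persists across updates

    # Momentum: move along an exponential moving average of past
    # gradients instead of the raw gradient.
    velocity = beta * velocity + (1 - beta) * grads
    params -= learning_rate * velocity

Larger beta values give past gradients more weight, smoothing the update direction.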

Nesterov momentum

class pykitml.Nesterov(learning_rate, decay_rate=1, beta=0.9)

This class implements Nesterov momentum optimization.

__init__(learning_rate, decay_rate=1, beta=0.9)
Parameters:
  • learning_rate (float) – Learning rate; the step size used for each parameter update.
  • decay_rate (float) – Decay rate for the learning rate; the default of 1 applies no decay.
  • beta (float) – Momentum coefficient; should be between 0 and 1.
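
A minimal NumPy sketch of one common formulation of Nesterov momentum, which evaluates the gradient at a look-ahead point (the exact variant pykitml implements is an assumption; grad_fn is a hypothetical gradient function):

    import numpy as np

    learning_rate, beta = 0.1, 0.9
    params = np.array([1.0, -2.0])    # hypothetical model parameters
    velocity = np.zeros_like(params)  # persists across updates

    def grad_fn(p):
        # Hypothetical gradient of the loss at p.
        return 2 * p

    # Nesterov momentum: take the gradient at the position the
    # current velocity is about to carry the parameters to.
    lookahead = params - learning_rate * beta * velocity
    velocity = beta * velocity + (1 - beta) * grad_fn(lookahead)
    params -= learning_rate * velocity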

Adagrad

class pykitml.Adagrad(learning_rate, decay_rate=1)

This class implements Adagrad optimization.

__init__(learning_rate, decay_rate=1)
Parameters:
  • learning_rate (float) – Learning rate; the step size used for each parameter update.
  • decay_rate (float) – Decay rate for the learning rate; the default of 1 applies no decay.
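
A minimal NumPy sketch of the standard Adagrad update (pykitml's internals may differ; eps is a small stability constant assumed here):

    import numpy as np

    learning_rate, eps = 0.1, 1e-8
    params = np.array([1.0, -2.0])       # hypothetical model parameters
    grads = np.array([0.5, -0.3])        # hypothetical loss gradients
    grad_sq_sum = np.zeros_like(params)  # accumulates over all updates

    # Adagrad: each parameter gets its own effective learning rate,
    # which shrinks as its squared gradients accumulate.
    grad_sq_sum += grads ** 2
    params -= learning_rate * grads / (np.sqrt(grad_sq_sum) + eps)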

RMSprop

class pykitml.RMSprop(learning_rate, decay_rate=1, beta=0.9)

This class implements RMSprop optimization.

__init__(learning_rate, decay_rate=1, beta=0.9)
Parameters:
  • learning_rate (float) – Learning rate; the step size used for each parameter update.
  • decay_rate (float) – Decay rate for the learning rate; the default of 1 applies no decay.
  • beta (float) – Smoothing factor for the moving average of squared gradients; should be between 0 and 1.
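
A minimal NumPy sketch of the standard RMSprop update (pykitml's internals may differ; eps is a small stability constant assumed here):

    import numpy as np

    learning_rate, beta, eps = 0.01, 0.9, 1e-8
    params = np.array([1.0, -2.0])       # hypothetical model parameters
    grads = np.array([0.5, -0.3])        # hypothetical loss gradients
    grad_sq_avg = np.zeros_like(params)  # persists across updates

    # RMSprop: an exponential moving average of squared gradients
    # replaces Adagrad's ever-growing sum, so the effective learning
    # rate does not decay to zero.
    grad_sq_avg = beta * grad_sq_avg + (1 - beta) * grads ** 2
    params -= learning_rate * grads / (np.sqrt(grad_sq_avg) + eps)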

Adam

class pykitml.Adam(learning_rate, decay_rate=1, beta1=0.9, beta2=0.9)

This class implements Adam optimization.

__init__(learning_rate, decay_rate=1, beta1=0.9, beta2=0.9)
Parameters:
  • learning_rate (float) – Learning rate; the step size used for each parameter update.
  • decay_rate (float) – Decay rate for the learning rate; the default of 1 applies no decay.
  • beta1 (float) – Decay rate for the first moment (gradient) estimate; should be between 0 and 1.
  • beta2 (float) – Decay rate for the second moment (squared gradient) estimate; should be between 0 and 1.
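
A minimal NumPy sketch of the standard Adam update with bias correction, using this class's defaults (pykitml's internals may differ; eps is a small stability constant assumed here):

    import numpy as np

    learning_rate, beta1, beta2, eps = 0.001, 0.9, 0.9, 1e-8
    params = np.array([1.0, -2.0])  # hypothetical model parameters
    grads = np.array([0.5, -0.3])   # hypothetical loss gradients
    m = np.zeros_like(params)       # first moment: average of gradients
    v = np.zeros_like(params)       # second moment: average of squared gradients
    t = 1                           # update count, starting at 1

    # Adam: combine momentum (m) with RMSprop-style scaling (v),
    # correcting both moving averages for their zero initialization.
    m = beta1 * m + (1 - beta1) * grads
    v = beta2 * v + (1 - beta2) * grads ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    params -= learning_rate * m_hat / (np.sqrt(v_hat) + eps)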