Optimisers

Synopsis

Optimisers to use with the NPU API for training.

npu.optim.Adam(lr=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08)

Adam Optimiser.

Parameters
  • lr (float) – Learning rate

  • beta1 (float) – Exponential decay rate for the first-moment (mean) estimates

  • beta2 (float) – Exponential decay rate for the second-moment (uncentred variance) estimates

  • epsilon (float) – Small constant added to the denominator for numerical stability
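To show what the four parameters control, here is a minimal sketch of the standard Adam update rule for a single scalar weight. This illustrates the published algorithm, not npu's actual internals; the function name `adam_step` is ours.

```python
def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08):
    """One Adam update for a scalar weight w, given gradient grad at step t (t >= 1)."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)             # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (v_hat ** 0.5 + epsilon)
    return w, m, v
```

A larger epsilon damps the per-parameter step size; smaller beta values make the moment estimates react faster to recent gradients.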

npu.optim.RMS(lr=0.001, decay=1e-06)

RMSprop Optimiser.

Parameters
  • lr (float) – Learning rate

  • decay (float) – Decay rate
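A hedged sketch of an RMSprop-style update for one scalar weight. Only lr and decay are exposed above, so the smoothing constant `rho`, the `epsilon` term, and the reading of decay as a per-step learning-rate decay are assumptions borrowed from common RMSprop implementations, not confirmed npu behaviour.

```python
def rms_step(w, grad, acc, t, lr=0.001, decay=1e-06, rho=0.9, epsilon=1e-07):
    """One RMSprop-style update; acc is the running mean of squared gradients."""
    lr_t = lr / (1 + decay * t)              # assumed learning-rate decay schedule
    acc = rho * acc + (1 - rho) * grad ** 2  # running mean of squared gradients
    w = w - lr_t * grad / (acc ** 0.5 + epsilon)
    return w, acc
```

Dividing by the root of the squared-gradient average gives each weight its own effective step size, which is the idea the lr and decay parameters tune.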

npu.optim.SGD(lr=0.001, momentum=0.9)

SGD Optimiser.

Parameters
  • lr (float) – Learning rate

  • momentum (float) – Momentum factor
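For reference, the standard SGD-with-momentum update these two parameters drive can be sketched as follows; this is the textbook formulation, not necessarily npu's exact implementation.

```python
def sgd_step(w, grad, velocity, lr=0.001, momentum=0.9):
    """One SGD-with-momentum update for a scalar weight w."""
    velocity = momentum * velocity - lr * grad  # decaying sum of past gradients
    w = w + velocity
    return w, velocity
```

With momentum=0.9, roughly 90% of the previous step carries over, smoothing noisy gradients and speeding progress along consistent descent directions.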