SGD / Adam Optimizer
Optimization: minimize a loss function by iteratively updating parameters using gradients.
Plain SGD updates each weight by subtracting the learning rate times the gradient of the loss with respect to that weight; Adam additionally rescales each update using running estimates of the gradient's first and second moments.
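Below is a minimal sketch of both update rules on a one-dimensional quadratic loss. The loss function, starting point, learning rate, and step counts are illustrative assumptions, not part of the original demo. Plain SGD applies w ← w − lr · dL/dw directly; Adam divides a bias-corrected running mean of the gradient by the square root of a bias-corrected running mean of its square.

```python
import math

# Toy quadratic loss: L(w) = (w - 3)^2, minimized at w = 3 (assumed example).
def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)  # dL/dw

# --- Plain SGD: w <- w - lr * grad(w) ---
w, lr = 0.0, 0.1
for _ in range(100):
    w -= lr * grad(w)
print(f"SGD:  w = {w:.4f}, loss = {loss(w):.6f}")  # w approaches 3.0

# --- Adam (Kingma & Ba, 2015) with the usual default hyperparameters ---
w = 0.0
lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8
m = v = 0.0                                  # running moment estimates
for t in range(1, 101):                      # t starts at 1 for bias correction
    g = grad(w)
    m = beta1 * m + (1 - beta1) * g          # EMA of gradients (1st moment)
    v = beta2 * v + (1 - beta2) * g * g      # EMA of squared gradients (2nd moment)
    m_hat = m / (1 - beta1 ** t)             # correct bias from zero initialization
    v_hat = v / (1 - beta2 ** t)
    w -= lr * m_hat / (math.sqrt(v_hat) + eps)
print(f"Adam: w = {w:.4f}, loss = {loss(w):.6f}")  # w also converges toward 3.0
```

Dividing by the square root of the second-moment estimate makes Adam's effective step size roughly invariant to the gradient's scale, which is why it often needs less learning-rate tuning than plain SGD.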