docs/src/training/optimisers.md (+8 −75)

# [Optimisers](@id man-optimisers)

Flux builds in many optimisation rules for use with [`train!`](@ref Flux.Optimise.train!) and
other training functions.

The mechanism by which these work is gradually being replaced as part of the change
from "implicit" dictionary-based to "explicit" tree-like structures.
At present, the same struct (such as `Adam`) can be used with either form,
and will be automatically translated.

For full details of how the new "explicit" interface works, see the [Optimisers.jl documentation](https://fluxml.ai/Optimisers.jl/dev/).

For full details on how the "implicit" interface worked, see the [Flux 0.13.6 manual](https://fluxml.ai/Flux.jl/v0.13.6/training/optimisers/#Optimiser-Interface).
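
As a brief illustration of the "explicit" style, here is a minimal sketch, assuming Flux 0.13.9 or later where `Flux.setup` is available; see the Optimisers.jl documentation linked above for the authoritative API:

```julia
using Flux

model = Dense(5 => 2)                      # a small example model
x, y = rand(Float32, 5), rand(Float32, 2)  # dummy data

loss(m, x, y) = sum((m(x) .- y) .^ 2)

opt_state = Flux.setup(Adam(0.01), model)  # explicit optimiser state, one leaf per parameter

grads = Flux.gradient(m -> loss(m, x, y), model)
Flux.update!(opt_state, model, grads[1])   # mutates both the model and the optimiser state
```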

The rest of this page describes the older "implicit" style in more detail. Consider a [simple linear regression](../models/basics.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.

```julia
using Flux

W = rand(2, 5)
b = rand(2)

predict(x) = (W * x) .+ b
loss(x, y) = sum((predict(x) .- y).^2)

x, y = rand(5), rand(2) # Dummy data
l = loss(x, y)          # ~ 3

θ = Flux.params(W, b)
grads = gradient(() -> loss(x, y), θ)
```

We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:

```julia
η = 0.1 # Learning Rate

for p in (W, b)
  p .-= η * grads[p]
end
```

Running this will alter the parameters `W` and `b` and our loss should go down. Flux provides a more general way to do optimiser updates like this.

```julia
using Flux: update!

opt = Descent(0.1) # Gradient descent with learning rate 0.1

for p in (W, b)
  update!(opt, p, grads[p])
end
```

An optimiser's `update!` accepts a parameter and a gradient, and updates the parameter according to the chosen rule. We can also pass `opt` to our [training loop](training.md), which will update all parameters of the model in a loop. However, we can now easily replace `Descent` with a more advanced optimiser such as `Adam`.
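
For instance, keeping the same loop but swapping in `Adam` (a sketch; the learning rate shown is `Adam`'s usual default and is only illustrative):

```julia
opt = Adam(0.001)  # adaptive learning-rate rule in place of plain gradient descent

for p in (W, b)
  update!(opt, p, grads[p])
end

# Equivalently, the implicit-style train! can drive the gradient and update steps:
Flux.train!(loss, θ, [(x, y)], opt)
```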
## Optimiser Reference

All optimisers return an object that, when passed to `train!`, will update the parameters passed to it.

```@docs
Flux.Optimise.update!
Descent
Momentum
Nesterov
OAdam
AdaBelief
```

## Optimiser Interface

Flux's optimisers are built around a `struct` that holds all the optimiser parameters, along with a definition of how to apply the update rule associated with it. We do this via the `apply!` function, which takes the optimiser as the first argument, followed by the parameter and its corresponding gradient.

In this manner Flux also allows one to create custom optimisers to be used seamlessly. Let's work through this with a simple example.
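
A minimal sketch of such a struct, assuming fields `eta`, `rho` and a `velocity` state dictionary (matching the `apply!` method below):

```julia
mutable struct Momentum
  eta       # learning rate η
  rho       # momentum coefficient ρ
  velocity  # IdDict mapping each parameter to its running velocity
end

Momentum(eta::Real = 0.01, rho::Real = 0.9) = Momentum(eta, rho, IdDict())
```
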
The `Momentum` type will act as our optimiser in this case. Notice that we have added all the parameters as fields, along with the velocity which we will use as our state dictionary. Each parameter in our models will get an entry in there. We can now define the rule applied when this optimiser is invoked.

```julia
function Flux.Optimise.apply!(o::Momentum, x, Δ)
  η, ρ = o.eta, o.rho
  v = get!(o.velocity, x, zero(x))::typeof(x)  # fetch (or create) the velocity state for this parameter
  @. v = ρ * v - η * Δ
  @. Δ = -v  # overwrite the gradient with the step to be subtracted from x
end
```

This is the basic definition of a Momentum update rule given by:

```math
v = ρ * v - η * Δ
w = w - v
```

`apply!` defines the update rule for an optimiser, given a parameter and its gradient. It returns the updated gradient (written into `Δ` in place). Here, the running velocity `v` for each parameter `x` is looked up in the optimiser's state (and created if absent), so calling `apply!` also advances the optimiser's state.

Flux internally calls this function via `update!`, which shares the API with `apply!` but ensures that multiple parameters are handled gracefully.
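
Once `apply!` is defined, the optimiser is used through the same `update!` calls as before. A short sketch, reusing `W`, `b` and `grads` from the linear-regression example above:

```julia
opt = Momentum(0.01, 0.9)

for p in (W, b)
  update!(opt, p, grads[p])  # update! calls apply! and subtracts the returned step from p
end
```
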
## Composing Optimisers

Flux defines a special kind of optimiser simply called `Optimiser`, which takes in arbitrary optimisers as input. Its behaviour is similar to the usual optimisers, but differs in that it acts by calling the optimisers listed in it sequentially. Each optimiser produces a modified gradient that is fed into the next, and the resulting update is applied to the parameter as usual.
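
A brief sketch of composing two rules this way, assuming the implicit interface where `Optimiser` and `WeightDecay` live in `Flux.Optimise` (in later versions this role is played by `OptimiserChain` from Optimisers.jl):

```julia
using Flux
using Flux.Optimise: Optimiser, WeightDecay

# Each gradient is passed through WeightDecay first and then Adam, in that order.
opt = Optimiser(WeightDecay(1e-4), Adam(1e-3))

# `opt` is then used exactly like a single optimiser:
for p in (W, b)
  Flux.Optimise.update!(opt, p, grads[p])
end
```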