# lr_finder

`birdnet_stm32.training.lr_finder`

Learning rate finder — exponential sweep to find an optimal LR range.

Implements the LR range test: the learning rate is increased exponentially from a small value to a large value over a fixed number of steps, with the loss recorded at each step. The optimal LR typically lies in the region of steepest descent on the loss curve.
## LRFinder

Bases: `Callback`

Exponential LR sweep callback.

Increases the learning rate from `min_lr` to `max_lr` over `num_steps` mini-batches and records the loss at each step.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `min_lr` | `float` | Starting learning rate. | `1e-07` |
| `max_lr` | `float` | Ending learning rate. | `1.0` |
| `num_steps` | `int` | Number of training steps in the sweep. | `200` |
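The exponential sweep means the LR grows by a constant multiplicative factor each step, so it covers several orders of magnitude evenly in log space. A minimal standalone sketch of that schedule (illustrative only — the actual callback updates the optimizer's learning rate in place):

```python
def lr_at_step(step, min_lr=1e-7, max_lr=1.0, num_steps=200):
    """LR at a given step, growing geometrically from min_lr to max_lr."""
    ratio = max_lr / min_lr
    # Exponent runs from 0 to 1 across the sweep, so the LR runs
    # from min_lr to exactly max_lr.
    return min_lr * ratio ** (step / (num_steps - 1))

lrs = [lr_at_step(s) for s in range(200)]
```

With the defaults, each step multiplies the LR by roughly `1e7 ** (1/199) ≈ 1.084`, covering seven decades in 200 steps.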
Source code in `birdnet_stm32/training/lr_finder.py`
### `on_train_batch_begin(batch, logs=None)`

Set the learning rate for the current step.
### `on_train_batch_end(batch, logs=None)`

Record the learning rate and loss.
### `suggest_lr(smoothing=5)`

Find the LR at the point of steepest loss descent, using a smoothed gradient of the loss curve.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `smoothing` | `int` | Window size for moving-average smoothing. | `5` |
Returns:

| Type | Description |
|---|---|
| `float` | Suggested learning rate. |
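The "steepest descent with smoothing" heuristic can be sketched as: smooth the recorded losses with a moving average, take finite differences against `log(lr)`, and pick the LR where the slope is most negative. This is an illustrative reimplementation under those assumptions, not the library's exact code:

```python
import math

def steepest_descent_lr(lrs, losses, smoothing=5):
    """Return the LR where the smoothed loss drops fastest in log-LR space."""
    # Trailing moving-average smoothing of the recorded losses.
    smoothed = [
        sum(losses[max(0, i - smoothing + 1): i + 1])
        / len(losses[max(0, i - smoothing + 1): i + 1])
        for i in range(len(losses))
    ]
    # Finite-difference gradient of smoothed loss w.r.t. log(lr).
    grads = [
        (smoothed[i + 1] - smoothed[i]) / (math.log(lrs[i + 1]) - math.log(lrs[i]))
        for i in range(len(smoothed) - 1)
    ]
    # Steepest descent = most negative gradient.
    best = min(range(len(grads)), key=lambda i: grads[i])
    return lrs[best]
```

Smoothing matters because per-batch losses are noisy; without it, the raw gradient often picks a spurious single-batch dip rather than the sustained descent region.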
### `plot(path=None, suggested_lr=None)`

Plot the LR-vs-loss curve.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str \| None` | If given, save the figure to this file (PNG); otherwise call `plt.show()`. | `None` |
| `suggested_lr` | `float \| None` | If given, draw a vertical line at the suggested LR. | `None` |
## `run_lr_finder(model, dataset, min_lr=1e-07, max_lr=1.0, num_steps=200, loss_fn='categorical_crossentropy', plot_path=None)`

Run an LR range test and return the suggested learning rate.

The model weights are saved before the sweep and restored afterwards, so the model is left unchanged.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Model` | Compiled or uncompiled Keras model. | required |
| `dataset` | `Dataset` | Training dataset (infinite, batched). | required |
| `min_lr` | `float` | Starting learning rate. | `1e-07` |
| `max_lr` | `float` | Ending learning rate. | `1.0` |
| `num_steps` | `int` | Number of sweep steps. | `200` |
| `loss_fn` | `str \| Loss` | Loss function. | `'categorical_crossentropy'` |
| `plot_path` | `str \| None` | Optional path to save the LR-vs-loss plot. | `None` |
Returns:

| Type | Description |
|---|---|
| `float` | Suggested optimal learning rate. |
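The overall procedure — sweep the LR exponentially, record the loss, pick the best point — can be demonstrated end-to-end on a toy problem without Keras. This sketch probes one SGD step on `f(w) = w**2` at each LR (all names illustrative; the real `run_lr_finder` trains mini-batches and uses the smoothed-gradient heuristic instead of a raw argmin):

```python
def toy_lr_range_test(min_lr=1e-4, max_lr=10.0, num_steps=100):
    """Toy LR range test: probe one SGD step per LR on f(w) = w**2."""
    ratio = max_lr / min_lr
    lrs, losses = [], []
    for step in range(num_steps):
        lr = min_lr * ratio ** (step / (num_steps - 1))
        w = 1.0              # reset so every LR is probed from the same start
        w -= lr * 2 * w      # one gradient step (f'(w) = 2w)
        lrs.append(lr)
        losses.append(w * w)
    # Crude selection: the LR with the lowest post-step loss.
    best = min(range(num_steps), key=lambda i: losses[i])
    return lrs[best]
```

On this quadratic the loss after one step is `(1 - 2*lr)**2`, so LRs near `0.5` make the most progress while larger ones diverge — the same "descending, then exploding" shape the real LR-vs-loss plot exhibits.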