I'm training an auto-encoder network with an Adam optimizer (amsgrad=True) and MSE loss
for a single-channel audio source separation task. Whenever I decay the learning rate by a
factor, the network loss jumps abruptly and then decreases until the next decay in learning rate.
I'm using PyTorch for network implementation and training.
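Roughly, those pieces are wired together like this (a minimal sketch: the Sequential model is only a stand-in for my actual auto-encoder, and 1e-3 is just an example initial learning rate):

import torch
import torch.nn as nn

# Stand-in auto-encoder; the real network is more complex.
model = nn.Sequential(
    nn.Linear(1024, 256), nn.ReLU(),
    nn.Linear(256, 1024),
)

m_lr = 1e-3                   # example initial learning rate
criterion = nn.MSELoss()      # MSE loss
optimizer = torch.optim.Adam(model.parameters(), lr=m_lr, amsgrad=True)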
Following are my experimental setups:
Setup-1: NO learning rate decay, and
Using the same Adam optimizer for all epochs
Setup-2: NO learning rate decay, and
Creating a new Adam optimizer with the same initial values every epoch
Setup-3: 0.25 decay in learning rate every 25 epochs, and
Creating a new Adam optimizer every epoch
Setup-4: 0.25 decay in learning rate every 25 epochs, and
NOT creating a new Adam optimizer every time; instead,
using PyTorch's MultiStepLR and ExponentialLR decay schedulers
every 25 epochs
I'm getting very surprising results with Setups #2, #3, and #4, and I'm unable to explain why. Below are my results:
Setup-1 Results:
Here I'm NOT decaying the learning rate and
I'm using the same Adam optimizer. So my results are as expected.
My loss decreases with more epochs.
Below is the loss plot for this setup.
Plot-1:
optimizer = torch.optim.Adam(lr=m_lr, amsgrad=True, ...........)

for epoch in range(num_epochs):
    running_loss = 0.0
    for i in range(num_train):
        train_input_tensor = ..........
        train_label_tensor = ..........
        optimizer.zero_grad()
        pred_label_tensor = model(train_input_tensor)
        loss = criterion(pred_label_tensor, train_label_tensor)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    loss_history[m_lr].append(running_loss/num_train)
Setup-2 Results:
Here I'm NOT decaying the learning rate, but every epoch I'm creating a new
Adam optimizer with the same initial parameters.
Here also the results show similar behavior to Setup-1.
Because a new Adam optimizer is created at every epoch, its accumulated
per-parameter state (the running moment estimates) should be lost, but it
seems that this does not affect the network's learning. Can anyone please
help with this?
Plot-2:
for epoch in range(num_epochs):
    optimizer = torch.optim.Adam(lr=m_lr, amsgrad=True, ...........)
    running_loss = 0.0
    for i in range(num_train):
        train_input_tensor = ..........
        train_label_tensor = ..........
        optimizer.zero_grad()
        pred_label_tensor = model(train_input_tensor)
        loss = criterion(pred_label_tensor, train_label_tensor)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    loss_history[m_lr].append(running_loss/num_train)
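To see what is actually discarded when a new optimizer is constructed, Adam's per-parameter state can be inspected (a sketch, reusing the model/optimizer/m_lr placeholders from above; for PyTorch's Adam the state entries are step, exp_avg, exp_avg_sq, and max_exp_avg_sq when amsgrad=True):

# After at least one optimizer.step(), Adam holds running moment estimates
# for every parameter it has updated.
for p in model.parameters():
    state = optimizer.state.get(p, {})
    print(sorted(state.keys()))  # e.g. ['exp_avg', 'exp_avg_sq', 'max_exp_avg_sq', 'step']

# A freshly constructed optimizer starts with no per-parameter state at all.
new_optimizer = torch.optim.Adam(model.parameters(), lr=m_lr, amsgrad=True)
print(len(new_optimizer.state))  # 0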
Setup-3 Results:
As can be seen from the results in the plot below,
my loss jumps every time I decay the learning rate. This is weird behavior.
If it were happening because I'm creating a new Adam optimizer every epoch,
then it should have happened in Setup #2 as well, where a new optimizer is
also created every epoch. And if it were happening because a new Adam
optimizer is created with a new learning rate (alpha) every 25 epochs, then
the results of Setup #4 below also rule out such a correlation.
Plot-3:
decay_rate = 0.25

for epoch in range(num_epochs):
    if epoch % 25 == 0 and epoch != 0:
        m_lr *= decay_rate  # decay the learning rate every 25 epochs
    # A brand-new optimizer is built every epoch with the (possibly decayed)
    # rate, so Adam's moment estimates are discarded each time.
    optimizer = torch.optim.Adam(lr=m_lr, amsgrad=True, ...........)
    running_loss = 0.0
    for i in range(num_train):
        train_input_tensor = ..........
        train_label_tensor = ..........
        optimizer.zero_grad()
        pred_label_tensor = model(train_input_tensor)
        loss = criterion(pred_label_tensor, train_label_tensor)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    loss_history[m_lr].append(running_loss/num_train)
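For comparison, the learning rate can also be decayed in place on a single, long-lived optimizer, which keeps Adam's moment estimates across the decay (a sketch of an alternative to recreating the optimizer, reusing the model/m_lr/num_epochs placeholders from above; this is not the code that produced Plot-3):

decay_rate = 0.25
optimizer = torch.optim.Adam(model.parameters(), lr=m_lr, amsgrad=True)

for epoch in range(num_epochs):
    if epoch % 25 == 0 and epoch != 0:
        # Scale the lr of the existing optimizer instead of rebuilding it,
        # so exp_avg / exp_avg_sq are preserved across the decay.
        for param_group in optimizer.param_groups:
            param_group['lr'] *= decay_rate
    # ... training loop for this epoch, as in the setups above ...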
Setup-4 Results:
In this setup, I'm using PyTorch's learning-rate-decay scheduler (MultiStepLR),
which decays the learning rate by 0.25 every 25 epochs.
Here also, the loss jumps every time the learning rate is decayed.
As suggested by @Dennis in the comments below, I tried both ReLU and
LeakyReLU (with a 1e-02 negative slope) non-linearities. But the results
behave similarly: the loss first decreases, then increases, and then
saturates at a higher value than it reaches without learning rate decay.
A minimal sketch of the activation swap is shown right after this paragraph.
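The only difference between the two runs is the non-linearity, roughly like this (placeholder layer sizes, not my actual architecture):

import torch.nn as nn

relu_block  = nn.Sequential(nn.Linear(1024, 256), nn.ReLU())
leaky_block = nn.Sequential(nn.Linear(1024, 256), nn.LeakyReLU(negative_slope=1e-2))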
Plot-4 shows the results.
Plot-4:
optimizer = torch.optim.Adam(lr=m_lr, amsgrad=True, ...........)

# I tried both of these schedulers (the optimizer must exist before the scheduler is built):
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer=optimizer, milestones=[25,50,75], gamma=0.25)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer, gamma=0.95)
scheduler = .........  # one of the schedulers defined above

for epoch in range(num_epochs):
    running_loss = 0.0
    for i in range(num_train):
        train_input_tensor = ..........
        train_label_tensor = ..........
        optimizer.zero_grad()
        pred_label_tensor = model(train_input_tensor)
        loss = criterion(pred_label_tensor, train_label_tensor)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    scheduler.step()  # step the scheduler once per epoch, after the epoch's optimizer updates
    loss_history[m_lr].append(running_loss/num_train)
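To double-check when the decay actually takes effect, the learning rate the optimizer will use can be logged each epoch (a sketch; it reads the value directly from the optimizer's param_groups, which works on any PyTorch version):

for epoch in range(num_epochs):
    # ... one epoch of training as above ...
    scheduler.step()
    # The lr the optimizer will use in the next epoch; with MultiStepLR and
    # gamma=0.25 it should drop by 4x right after epochs 25, 50, and 75.
    print(epoch, optimizer.param_groups[0]['lr'])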
EDIT:
- As suggested in the comments and the reply below, I made changes to my code and trained the model. I have added the corresponding code and plots.
- I tried various lr_scheduler options in PyTorch (MultiStepLR, ExponentialLR) for Setup-4, as suggested by @Dennis in the comments below; the corresponding plots are listed under Setup-4.
- Tried LeakyReLU as suggested by @Dennis in the comments.

Any help is appreciated. Thanks.