
PyTorch Adam Optimizer: Source Code Walkthrough

1. How It Is Called

torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)

Parameters:

weight_decay: the weight-decay coefficient (in this implementation it is applied as L2 regularization; see section 3).

amsgrad: whether to retain the historical maximum of the second-moment (squared-gradient) estimate during updates (the AMSGrad variant).
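As a quick illustration, here is a minimal training-loop sketch using this constructor. The linear model, loss, and random data are placeholders added for this example, not part of the original article.

import torch
import torch.nn as nn

# A tiny model and some dummy data, purely for illustration
model = nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             betas=(0.9, 0.999), eps=1e-8,
                             weight_decay=1e-2, amsgrad=False)

x, y = torch.randn(32, 10), torch.randn(32, 1)
loss_fn = nn.MSELoss()

for _ in range(5):
    optimizer.zero_grad()          # clear gradients from the previous step
    loss = loss_fn(model(x), y)    # forward pass
    loss.backward()                # populate p.grad for every parameter
    optimizer.step()               # the step() analyzed below updates the parameters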

2. Source Code

The implementation in the source follows the L2-regularized Adam shown in the algorithm figure at the end of this post.

def step(self, closure=None):
    """Performs a single optimization step.

    Arguments:
        closure (callable, optional): A closure that reevaluates the model
            and returns the loss.
    """
    loss = None
    if closure is not None:
        loss = closure()

    for group in self.param_groups:
        for p in group['params']:
            if p.grad is None:
                continue
            grad = p.grad.data
            if grad.is_sparse:
                raise RuntimeError('Adam does not support sparse gradients, please consider SparseAdam instead')
            amsgrad = group['amsgrad']

            state = self.state[p]  # statistics accumulated before this step

            # State initialization
            if len(state) == 0:
                state['step'] = 0
                # Exponential moving average of gradient values
                state['exp_avg'] = torch.zeros_like(p.data)
                # Exponential moving average of squared gradient values
                state['exp_avg_sq'] = torch.zeros_like(p.data)
                if amsgrad:
                    # Maintains max of all exp. moving avg. of sq. grad. values
                    state['max_exp_avg_sq'] = torch.zeros_like(p.data)

            # Moving averages carried over from the previous step
            exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
            if amsgrad:
                # AMSGrad is a refinement of Adam: by keeping the running maximum of the
                # second-moment estimate it prevents the effective step size from growing.
                max_exp_avg_sq = state['max_exp_avg_sq']
            beta1, beta2 = group['betas']

            state['step'] += 1
            bias_correction1 = 1 - beta1 ** state['step']
            bias_correction2 = 1 - beta2 ** state['step']

            # The step numbers below refer to the algorithm figure at the end
            if group['weight_decay'] != 0:
                # 6. "Weight decay" folded into the gradient, i.e. L2 regularization:
                #    grad = grad + weight_decay * p(t-1)
                grad.add_(group['weight_decay'], p.data)

            # Decay the first and second moment running average coefficient
            # 7. m(t) = beta_1 * m(t-1) + (1 - beta_1) * grad
            exp_avg.mul_(beta1).add_(1 - beta1, grad)
            # 8. v(t) = beta_2 * v(t-1) + (1 - beta_2) * grad^2
            exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
            if amsgrad:
                # Maintains the maximum of all 2nd moment running avg. till now;
                # the elementwise max is carried to the next step, preserving past gradient information.
                torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
                # Use the max. for normalizing running avg. of gradient
                denom = (max_exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps'])
            else:
                # denom = sqrt(v(t)) / sqrt(1 - beta_2^t) + eps
                denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps'])

            # step_size = lr / bias_correction1 = lr / (1 - beta_1^t)
            step_size = group['lr'] / bias_correction1

            # p(t) = p(t-1) - step_size * m(t) / denom
            p.data.addcdiv_(-step_size, exp_avg, denom)

    return loss
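To double-check the bookkeeping above, here is a minimal sketch (not part of the PyTorch source) that reproduces a single Adam step by hand and compares it with torch.optim.Adam. It assumes weight_decay=0 and amsgrad=False; all variable names are chosen for this example.

import math
import torch

torch.manual_seed(0)
p = torch.randn(3, requires_grad=True)
p_ref = p.detach().clone().requires_grad_(True)

lr, beta1, beta2, eps = 1e-3, 0.9, 0.999, 1e-8
opt = torch.optim.Adam([p_ref], lr=lr, betas=(beta1, beta2), eps=eps)

grad = torch.randn(3)

# Library step
p_ref.grad = grad.clone()
opt.step()

# Manual step (t = 1, so m(0) = v(0) = 0)
m = (1 - beta1) * grad                        # 7. m(t)
v = (1 - beta2) * grad * grad                 # 8. v(t)
bias_correction1 = 1 - beta1 ** 1
bias_correction2 = 1 - beta2 ** 1
denom = v.sqrt() / math.sqrt(bias_correction2) + eps
step_size = lr / bias_correction1
p_manual = p.detach() - step_size * m / denom

print(torch.allclose(p_manual, p_ref.detach()))  # expected: True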

Expanding the final update step:

    denom = sqrt(v(t)) / sqrt(1 - beta_2^t) + eps
          = sqrt(v_hat(t)) + eps,                            where v_hat(t) = v(t) / (1 - beta_2^t)

    p(t) = p(t-1) - step_size * m(t) / denom
         = p(t-1) - [lr / (1 - beta_1^t)] * m(t) / (sqrt(v_hat(t)) + eps)
         = p(t-1) - lr * m_hat(t) / (sqrt(v_hat(t)) + eps),  where m_hat(t) = m(t) / (1 - beta_1^t)

With the weight-decay term already folded into the gradient in step 6, this matches step 12 of the algorithm in the last figure (lr here plays the role of the step size alpha there).
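A small numerical sketch (illustrative only, with made-up values of m, v, and t) confirming that the form used by the PyTorch code and the rewritten form above produce the same update:

import math
import torch

# PyTorch form:  (lr / (1 - beta1**t)) * m / (sqrt(v) / sqrt(1 - beta2**t) + eps)
# Rewritten form: lr * m_hat / (sqrt(v_hat) + eps)
# with m_hat = m / (1 - beta1**t) and v_hat = v / (1 - beta2**t).
lr, beta1, beta2, eps, t = 1e-3, 0.9, 0.999, 1e-8, 10
m, v = torch.randn(5), torch.rand(5)

bc1, bc2 = 1 - beta1 ** t, 1 - beta2 ** t
pytorch_form = (lr / bc1) * m / (v.sqrt() / math.sqrt(bc2) + eps)
paper_form = lr * (m / bc1) / ((v / bc2).sqrt() + eps)

print(torch.allclose(pytorch_form, paper_form))  # expected: True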

Algorithm: the formulation given in the Deep Learning (《深度学习》) textbook follows here as a figure; PyTorch's Adam does not adopt that form.

3. Weight Decay vs. L2 Regularization in Adam

In SGD, weight decay and L2 regularization are equivalent; in Adam and other adaptive optimizers (AdaGrad, RMSProp, etc.) they are not. For plain SGD the equivalence can be checked directly, as in the sketch below.
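A minimal sketch of the SGD case (no momentum; the values are arbitrary):

import torch

# For plain SGD, adding wd*p to the gradient (L2 regularization) and
# shrinking p by lr*wd (weight decay) give the same update:
#   p - lr*(g + wd*p)  ==  (1 - lr*wd)*p - lr*g
lr, wd = 0.1, 0.01
p = torch.randn(4)
g = torch.randn(4)

l2_update = p - lr * (g + wd * p)           # L2 regularization
decay_update = (1 - lr * wd) * p - lr * g   # weight decay

print(torch.allclose(l2_update, decay_update))  # True: identical for SGD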

In PyTorch's Adam, what is actually implemented is L2 regularization (the red part of the figure below), while the AdamW algorithm uses true weight decay (the dark-yellow part of the figure below). The two differ only in where the decay term is applied; everything else is identical.
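Schematically, the difference looks like this. The two functions below are an illustrative sketch of the update rules, not the library source; all names are hypothetical, and the moment updates are written out in plain (non in-place) form.

import torch

def adam_l2_step(p, grad, lr, wd, m, v, t, beta1=0.9, beta2=0.999, eps=1e-8):
    grad = grad + wd * p                      # L2: decay folded into the gradient,
                                              # so it is also rescaled by 1/(sqrt(v_hat)+eps)
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    p = p - lr * m_hat / (v_hat.sqrt() + eps)
    return p, m, v

def adamw_step(p, grad, lr, wd, m, v, t, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    p = p - lr * wd * p                       # decoupled weight decay, applied to p directly
    p = p - lr * m_hat / (v_hat.sqrt() + eps)
    return p, m, v

p, g = torch.randn(4), torch.randn(4)
m, v = torch.zeros(4), torch.zeros(4)
p_l2, _, _ = adam_l2_step(p, g, lr=1e-3, wd=1e-2, m=m, v=v, t=1)
p_w, _, _ = adamw_step(p, g, lr=1e-3, wd=1e-2, m=m, v=v, t=1)
print((p_l2 - p_w).abs().max())  # small but nonzero: the two rules differ

The only difference is whether wd * p passes through the adaptive denominator sqrt(v_hat) + eps (Adam with L2 regularization) or is applied to the parameter directly (AdamW).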