Implementing a Deep Learning Framework from Scratch: Common Weight Initialization Methods


Introduction

In the spirit of "What I cannot create, I do not understand," this series builds a deep learning framework from scratch using pure Python and NumPy. Like PyTorch, the framework supports automatic differentiation.

Hands-on experience building things from the ground up is essential for a deep understanding of deep learning: starting from ideas we can reason about ourselves, and avoiding full-featured external frameworks as much as possible, we implement the models we want. The goal of this series is to give you a solid grasp of how deep learning works under the hood, rather than just how to call libraries.

In deep learning, the initial values of the weights matter a great deal; the choice of initialization can even determine whether a model converges at all. This article covers two widely used weight initialization methods.

Why Random Initial Values Are Needed

Neural network weights are normally initialized with random values. What happens if, instead of random values, we set them all to the same value?

Take the extreme case of initializing everything to 0. If a layer's weights are all zero, it is as if every neuron in that layer had been dropped out: no information propagates to the next layer.

If all weights are set to the same nonzero value, then during backpropagation every weight receives the same update, so the weights stay identical, with symmetric (duplicated) values. No amount of iteration (e.g. with SGD) breaks this symmetry: the hidden layer behaves as if it had only a single neuron, and the network loses its expressive power. Of the techniques introduced earlier in this series, only Dropout can break this symmetry.

To break the symmetric structure of the weights, the initial values must be generated randomly.
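The symmetry argument above can be verified with a small NumPy sketch (the shapes, targets, and learning rate here are arbitrary choices for illustration): a tiny two-layer network whose weights all start at the same constant keeps its hidden units identical no matter how many SGD steps it takes.

```python
import numpy as np

np.random.seed(0)
x = np.random.randn(8, 3)   # 8 samples, 3 input features
y = np.random.randn(8, 1)   # regression targets

W1 = np.full((3, 2), 0.5)   # identical weights -> symmetric hidden units
W2 = np.full((2, 1), 0.5)

for _ in range(10):  # plain SGD on squared error
    h = np.tanh(x @ W1)                     # both hidden columns are equal
    pred = h @ W2
    grad_out = 2 * (pred - y) / len(x)
    gW2 = h.T @ grad_out                    # equal rows
    gW1 = x.T @ ((grad_out @ W2.T) * (1 - h ** 2))  # equal columns
    W1 -= 0.1 * gW1
    W2 -= 0.1 * gW2

# The two hidden units never diverge:
print(np.allclose(W1[:, 0], W1[:, 1]))  # True
```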

Distribution of Hidden-Layer Activations

Observing the distribution of hidden-layer activations is instructive.

Here is an experiment showing how the initial weight values affect the distribution of activations in the hidden layers.

We feed randomly generated input data into a 5-layer network (with sigmoid activations) and plot a histogram of the activations in each layer.

# coding: utf-8
import numpy as np
import matplotlib.pyplot as plt


def sigmoid(x):
    return 1 / (1 + np.exp(-x))
    
def ReLU(x):
    return np.maximum(0, x)
    
def tanh(x):
    return np.tanh(x)
    
input_data = np.random.randn(1000, 100)  # 1000 samples
node_num = 100  # number of nodes (neurons) in each hidden layer
hidden_layer_size = 5  # 5 hidden layers
activations = {}  # activations are stored here

x = input_data

for i in range(hidden_layer_size):
    if i != 0:
        x = activations[i-1]

    # Change the initial values to experiment!
    w = np.random.randn(node_num, node_num) * 1
    # w = np.random.randn(node_num, node_num) * 0.01
    # w = np.random.randn(node_num, node_num) * np.sqrt(1.0 / node_num)
    # w = np.random.randn(node_num, node_num) * np.sqrt(2.0 / node_num)

    a = np.dot(x, w)
    # Change the activation function too, to experiment!
    z = sigmoid(a)
    # z = ReLU(a)
    # z = tanh(a)

    activations[i] = z

# Plot the histograms
for i, a in activations.items():
    plt.subplot(1, len(activations), i+1)
    plt.title(str(i+1) + "-layer")
    if i != 0: plt.yticks([], [])
    plt.hist(a.flatten(), 30, range=(0,1))
plt.show()

The network has 5 layers with 100 units each. We draw 1000 input samples from a Gaussian distribution and pass them through the network. The weights here are also initialized from a Gaussian distribution with mean 0 and standard deviation 1.

The activations in each layer pile up around 0 and 1. The sigmoid used here is an S-shaped function: as its output approaches 0 (or 1, at either end of the S curve), its gradient approaches 0. A distribution skewed toward 0 and 1 therefore makes the gradients in backpropagation shrink step by step until they vanish.

We know that L2 regularization drives weight values smaller, so one might hope that simply starting from small initial values helps. To try it, swap the commented weight-initialization lines in the code above:

# w = np.random.randn(node_num, node_num) * 1
w = np.random.randn(node_num, node_num) * 0.01

This time the activations no longer pile up at 0 and 1, so gradients do not vanish. But the distribution is still skewed, now concentrated around 0.5, which severely limits the model's expressive power.

Next we look at the widely used Xavier and He initial values and see how they affect the activation distributions.

Xavier Initialization

The idea behind Xavier initialization is simple: keep the variance of each layer's inputs and outputs as close as possible.

The result: draw the initial weights from the normal distribution $\mathcal{N}\left(0, \frac{2}{n_{in}+n_{out}}\right)$, where $n_{in}$ and $n_{out}$ are the input and output dimensions.

Stating the final result is easy, but how is it derived?

Let's work it out. Consider layer $l$, with $\tanh$ as the activation function:

$$\begin{aligned} z^{[l]} &= W^{[l]} a^{[l-1]} + b^{[l]} \\ a^{[l]} &= \tanh(z^{[l]}) \end{aligned} \tag{1}$$
To simplify, we make the following assumptions:

  • all weights are independent and identically distributed;
  • the inputs are independent and identically distributed;
  • the weights and inputs are mutually independent, and both have zero mean.

We also normalize the input and keep the initial values within the linear region of the activation function: for small values, $\tanh(z^{[l]}) \approx z^{[l]}$, which means

$$\text{Var}(a^{[l]}) = \text{Var}(z^{[l]}) \tag{2}$$

Here $z^{[l]}$ denotes the logits computed at layer $l$, a vector $z^{[l]} = W^{[l]} a^{[l-1]} + b^{[l]} = (z_1^{[l]}, z_2^{[l]}, \cdots, z_{n^{[l]}}^{[l]})$.

$n^{[l]}$ is the output dimension of layer $l$ (its number of hidden units); for the $k$-th hidden unit, $z_k^{[l]} = \sum_{j=1}^{n^{[l-1]}} w_{kj}^{[l]} a_j^{[l-1]} + b_k^{[l]}$.

To simplify, we set the bias to zero: $b^{[l]} = 0$.

Expanding equation $(2)$ and looking at the $k$-th element:

$$\text{Var}(a_k^{[l]}) = \text{Var}(z_k^{[l]}) = \text{Var}\left(\sum_{j=1}^{n^{[l-1]}} w_{kj}^{[l]} a_j^{[l-1]}\right) = \sum_{j=1}^{n^{[l-1]}} \text{Var}(w_{kj}^{[l]} a_j^{[l-1]}) \tag{3}$$

For two independent random variables:

$$\text{Var}(XY) = E[X]^2\,\text{Var}(Y) + \text{Var}(X)\,E[Y]^2 + \text{Var}(X)\,\text{Var}(Y) \tag{4}$$

If in addition $X$ and $Y$ both have zero mean:

$$\text{Var}(XY) = \text{Var}(X)\,\text{Var}(Y) \tag{5}$$
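This identity is easy to check with a quick Monte-Carlo sketch (the sample count and standard deviations are arbitrary choices):

```python
import numpy as np

# Check Var(XY) = Var(X) * Var(Y) for independent zero-mean X and Y.
np.random.seed(0)
X = np.random.randn(1_000_000) * 2.0   # Var(X) = 4
Y = np.random.randn(1_000_000) * 0.5   # Var(Y) = 0.25
print(np.var(X * Y), np.var(X) * np.var(Y))  # both close to 1.0
```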
Under the assumptions above, then:

$$\begin{aligned} \text{Var}(z_k^{[l]}) &= \sum_{j=1}^{n^{[l-1]}} \text{Var}(w_{kj}^{[l]} a_j^{[l-1]}) \\ &= \sum_{j=1}^{n^{[l-1]}} \text{Var}(W^{[l]})\,\text{Var}(a^{[l-1]}) \\ &= n^{[l-1]}\,\text{Var}(W^{[l]})\,\text{Var}(a^{[l-1]}) \end{aligned} \tag{6}$$
where, by assumption 1:

$$\text{Var}(w_{kj}^{[l]}) = \text{Var}(w_{11}^{[l]}) = \text{Var}(w_{12}^{[l]}) = \cdots = \text{Var}(W^{[l]}) \tag{7}$$

Similarly:

$$\text{Var}(a_j^{[l-1]}) = \text{Var}(a_1^{[l-1]}) = \text{Var}(a_2^{[l-1]}) = \cdots = \text{Var}(a^{[l-1]}) \tag{8}$$

and:

$$\text{Var}(z^{[l]}) = \text{Var}(z_k^{[l]})$$

Putting these together:

$$\text{Var}(a^{[l]}) = \text{Var}(z^{[l]}) = n^{[l-1]}\,\text{Var}(W^{[l]})\,\text{Var}(a^{[l-1]}) \tag{9}$$
Here $a^{[l-1]}$ can be viewed as the input signal and $a^{[l]}$ as the output signal.

In other words, passing through this layer scales the variance of the input signal by a factor of $n^{[l-1]}\,\text{Var}(W^{[l]})$. To prevent the signal from being amplified or attenuated too much after many layers, we want each neuron's input and output variance to match, which requires $n^{[l-1]}\,\text{Var}(W^{[l]}) = 1$, i.e.

$$\text{Var}(W^{[l]}) = \frac{1}{n^{[l-1]}} \tag{10}$$
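Equation $(10)$ can be sanity-checked with a quick simulation (the sample count and layer width here are arbitrary): with $\text{Var}(W) = 1/n$, the variance after the linear map stays close to the input variance.

```python
import numpy as np

# Empirical check that Var(W) = 1/n preserves variance through a linear layer.
np.random.seed(0)
n = 256
x = np.random.randn(10000, n)                 # unit-variance inputs
W = np.random.randn(n, n) * np.sqrt(1.0 / n)  # Var(W) = 1/n
z = x @ W
print(x.var(), z.var())  # both close to 1
```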
If we consider the whole network and let $L$ denote the output layer, the relation between the output variance and the input variance is:

$$\begin{aligned} \text{Var}(a^{[L]}) &= n^{[L-1]}\,\text{Var}(W^{[L]})\,\text{Var}(a^{[L-1]}) \\ &= n^{[L-1]}\,\text{Var}(W^{[L]})\, n^{[L-2]}\,\text{Var}(W^{[L-1]})\,\text{Var}(a^{[L-2]}) \\ &= \cdots \\ &= \left[\prod_{l=1}^{L} n^{[l-1]}\,\text{Var}(W^{[l]})\right]\text{Var}(x) \end{aligned} \tag{11}$$

From this we can see that how the variance changes from input to output depends on:

$$n^{[l-1]}\,\text{Var}(W^{[l]}) \begin{cases} < 1 & \Rightarrow \text{vanishing gradients} \\ = 1 & \Rightarrow \text{Var}(a^{[L]}) = \text{Var}(x) \\ > 1 & \Rightarrow \text{exploding gradients} \end{cases} \tag{12}$$

That was the forward pass; now consider backpropagation.

Most derivations found online cover only the forward pass. The backward pass is slightly more involved, but not intractable.

To simplify the notation, define:

$$\delta^{[l]} = \frac{\partial C}{\partial z^{[l]}}, \qquad \delta_j^{[l]} = \frac{\partial C}{\partial z_j^{[l]}} \tag{13}$$

where $C$ is the loss.

Start with layer $l$:

$$\begin{aligned} \text{Var}(\delta^{[l]}) = \text{Var}\left(\frac{\partial C}{\partial z^{[l]}}\right) &= \text{Var}\left(\frac{\partial C}{\partial z_j^{[l]}}\right) \\ &= \text{Var}\left(\frac{\partial C}{\partial z^{[l+1]}} \cdot \frac{\partial z^{[l+1]}}{\partial a_j^{[l]}} \cdot \frac{\partial a_j^{[l]}}{\partial z_j^{[l]}}\right) \end{aligned} \tag{14}$$

This needs unpacking. $\frac{\partial C}{\partial z^{[l+1]}}$ is the gradient flowing into $z^{[l+1]}$, which, as above, we write as $\delta^{[l+1]}$;

computing $\frac{\partial z^{[l+1]}}{\partial a_j^{[l]}}$ requires expanding the expression, otherwise it is hard to follow;

for $\frac{\partial a_j^{[l]}}{\partial z_j^{[l]}}$ we assume we are in the identity region of $\tanh$, i.e.

$$\frac{\partial a_j^{[l]}}{\partial z_j^{[l]}} = 1 \tag{15}$$

Now let's expand $\frac{\partial z^{[l+1]}}{\partial a_j^{[l]}}$.

Layer $l+1$ has $n^{[l+1]}$ neurons. As a simple network diagram would show, the $j$-th neuron of one layer influences every neuron of the next layer, so backpropagation must accumulate gradients over all of them.

Writing out $z^{[l+1]} = W^{[l+1]} a^{[l]} + b^{[l+1]} = (z_1^{[l+1]}, z_2^{[l+1]}, \cdots, z_{n^{[l+1]}}^{[l+1]})$ componentwise (again ignoring the bias):

$$\begin{aligned} z_1^{[l+1]} &= \sum_{k=1}^{n^{[l]}} w_{1k}^{[l+1]} a_k^{[l]} = w_{11}^{[l+1]} a_1^{[l]} + \cdots + w_{1j}^{[l+1]} a_j^{[l]} + \cdots + w_{1n^{[l]}}^{[l+1]} a_{n^{[l]}}^{[l]} \\ z_2^{[l+1]} &= \sum_{k=1}^{n^{[l]}} w_{2k}^{[l+1]} a_k^{[l]} = w_{21}^{[l+1]} a_1^{[l]} + \cdots + w_{2j}^{[l+1]} a_j^{[l]} + \cdots + w_{2n^{[l]}}^{[l+1]} a_{n^{[l]}}^{[l]} \\ &\;\;\vdots \\ z_{n^{[l+1]}}^{[l+1]} &= \sum_{k=1}^{n^{[l]}} w_{n^{[l+1]}k}^{[l+1]} a_k^{[l]} = w_{n^{[l+1]}1}^{[l+1]} a_1^{[l]} + \cdots + w_{n^{[l+1]}j}^{[l+1]} a_j^{[l]} + \cdots + w_{n^{[l+1]}n^{[l]}}^{[l+1]} a_{n^{[l]}}^{[l]} \end{aligned} \tag{16}$$

Then $\frac{\partial z^{[l+1]}}{\partial a_j^{[l]}}$ is computed as:

$$\begin{aligned} \frac{\partial z^{[l+1]}}{\partial a_j^{[l]}} &= \frac{\partial z_1^{[l+1]}}{\partial a_j^{[l]}} + \frac{\partial z_2^{[l+1]}}{\partial a_j^{[l]}} + \cdots + \frac{\partial z_{n^{[l+1]}}^{[l+1]}}{\partial a_j^{[l]}} \\ &= w_{1j}^{[l+1]} + w_{2j}^{[l+1]} + \cdots + w_{n^{[l+1]}j}^{[l+1]} \end{aligned} \tag{17}$$

Taking $\delta^{[l+1]}$ into account as well:

$$\begin{aligned} \frac{\partial C}{\partial z^{[l+1]}} \cdot \frac{\partial z^{[l+1]}}{\partial a_j^{[l]}} &= \delta_1^{[l+1]} \cdot \frac{\partial z_1^{[l+1]}}{\partial a_j^{[l]}} + \delta_2^{[l+1]} \cdot \frac{\partial z_2^{[l+1]}}{\partial a_j^{[l]}} + \cdots + \delta_{n^{[l+1]}}^{[l+1]} \cdot \frac{\partial z_{n^{[l+1]}}^{[l+1]}}{\partial a_j^{[l]}} \\ &= \sum_{k=1}^{n^{[l+1]}} \delta_k^{[l+1]} \, w_{kj}^{[l+1]} \end{aligned} \tag{18}$$
( 15 ) 、 ( 18 ) (15)、(18) (15)(18)代入公式 ( 14 ) (14) (14)有:

Var ( δ [ l ] ) = Var ( ∂ C ∂ z [ l ] ) = Var ( ∂ C ∂ z j [ l ] ) = Var ( ∂ C ∂ z [ l + 1 ] ⋅ ∂ z [ l + 1 ] ∂ a j [ l ] ⋅ ∂ a j [ l ] ∂ z j [ l ] ) = Var ( ∑ k = 1 n [ l + 1 ] δ k [ l + 1 ] ⋅ w k j [ l + 1 ] ) = ∑ k = 1 n [ l + 1 ] Var ( δ k [ l + 1 ] ) ⋅ Var ( w k j [ l + 1 ] ) = ∑ k = 1 n [ l + 1 ] Var ( W [ l + 1 ] ) Var ( δ [ l + 1 ] ) = n [ l + 1 ] Var ( W [ l + 1 ] ) Var ( δ [ l + 1 ] ) (19) \begin{aligned} \text{Var}(\delta^{[l]} ) = \text{Var}(\frac{\partial C}{\partial z^{[l]}}) &= \text{Var}(\frac{\partial C}{\partial z^{[l]}_j}) \ &= \text{Var}\left( \frac{\partial C}{\partial z^{[l+1]}} \cdot \frac{\partial z^{[l+1]}}{\partial a^{[l]}_j} \cdot \frac{\partial a^{[l]}_j}{\partial z^{[l]}_j} \right) \ &= \text{Var}\left( \sum_{k=1}^{n^{[l+1]}} \delta_k^{[l+1]} \cdot w^{[l+1]}_{kj} \right) \ &= \sum_{k=1}^{n^{[l+1]}} \text{Var} (\delta_k^{[l+1]}) \cdot \text{Var}(w^{[l+1]}_{kj}) \ &= \sum_{k=1}^{n^{[l+1]}} \text{Var} (W^{[l+1]}) \text{Var} (\delta ^{[l+1]}) \ &= n^{[l+1]} \text{Var} (W^{[l+1]}) \text{Var} (\delta ^{[l+1]}) \end{aligned} \tag {19} Var(δ[l])=Var(z[l]C)=Var(zj[l]C)=Var(z[l+1]Caj[l]z[l+1]zj[l]aj[l])=Vark=1n[l+1]δk[l+1]wkj[l+1]=k=1n[l+1]Var(δk[l+1])Var(wkj[l+1])=k=1n[l+1]Var(W[l+1])Var(δ[l+1])=n[l+1]Var(W[l+1])Var(δ[l+1])(19)

即为了让 z [ l ] z^{[l]} z[l] z [ l + 1 ] z^{[l+1]} z[l+1]上的梯度方差保持一致,需要有
Var ( W [ l ] ) = 1 n [ l ] \text{Var}(W^{[l]}) = \frac{1}{n^{[l]}} Var(W[l])=n[l]1
n [ l ] n^{[l]} n[l]​该层的输出数量。

If we consider the whole network, with $L$ the output layer and $x$ the input vector, the gradient at the input and the gradient at the output are related by:

$$\begin{aligned} \text{Var}(\delta^{[0]}) &= n^{[1]}\,\text{Var}(W^{[1]})\,\text{Var}(\delta^{[1]}) \\ &= n^{[1]}\,\text{Var}(W^{[1]})\, n^{[2]}\,\text{Var}(W^{[2]})\,\text{Var}(\delta^{[2]}) \\ &= \cdots \\ &= \left[\prod_{l=1}^{L} n^{[l]}\,\text{Var}(W^{[l]})\right]\text{Var}(\delta^{[L]}) \end{aligned} \tag{20}$$

A layer's input and output sizes usually differ. To account for both the input constraint ($\frac{1}{n^{[l-1]}}$) and the output constraint ($\frac{1}{n^{[l]}}$), the authors take their harmonic mean:

$$\text{Var}(W^{[l]}) = \frac{2}{\frac{1}{1/n^{[l-1]}} + \frac{1}{1/n^{[l]}}} = \frac{2}{n^{[l-1]} + n^{[l]}} \tag{21}$$

This completes the derivation.

For simplicity (the formulas are tedious to type), we write $n_{in}$ and $n_{out}$ for a layer's input and output sizes below.

If the weights are instead drawn from a uniform distribution $U[-a, a]$, then

$$a = \sqrt{\frac{6}{n_{in} + n_{out}}} \tag{22}$$

because the variance of that uniform distribution is

$$\frac{(a-(-a))^2}{12} = \frac{4a^2}{12} = \frac{a^2}{3} \tag{23}$$

Setting this variance equal to the harmonic mean above:

$$\begin{aligned} \frac{a^2}{3} &= \frac{2}{n_{in} + n_{out}} \\ a^2 &= \frac{6}{n_{in} + n_{out}} \\ a &= \sqrt{\frac{6}{n_{in} + n_{out}}} \end{aligned} \tag{24}$$
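A quick numerical sanity check (with arbitrarily chosen layer sizes): samples from $U[-a, a]$ with $a = \sqrt{6/(n_{in}+n_{out})}$ do have variance $2/(n_{in}+n_{out})$.

```python
import numpy as np

np.random.seed(0)
n_in, n_out = 300, 100
a = np.sqrt(6.0 / (n_in + n_out))
W = np.random.uniform(-a, a, size=(n_in, n_out))
print(W.var(), 2.0 / (n_in + n_out))  # the two values nearly coincide
```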

Although the identity-activation assumption used in the derivation ("no nonlinearity") is easily violated in real networks, Xavier initialization has proven effective in practice.

Continuing the experiment above with Xavier initialization: the input and output sizes are equal here, so using $\frac{1}{n_{in}}$ suffices:

w = np.random.randn(node_num, node_num) * np.sqrt(1.0 / node_num)

The activations keep a healthy distribution even after 5 layers. We used sigmoid here; what happens if we switch to ReLU?

z = ReLU(a)

The first few layers look acceptable, but the skew grows little by little as the network gets deeper. With deeper layers, the growing skew in the activations again invites vanishing gradients.

What to do? Kaiming initialization was proposed to solve exactly this problem.

Kaiming Initialization

Kaiming initialization, proposed by Kaiming He and hence also called He initialization, targets the ReLU activation:

$$\begin{aligned} z^{[l]} &= W^{[l]} a^{[l-1]} \\ a^{[l]} &= \text{ReLU}(z^{[l]}) \end{aligned} \tag{25}$$

From equations $(4)$ and $(6)$ above:

$$\begin{aligned} \text{Var}(z_k^{[l]}) &= \sum_{j=1}^{n^{[l-1]}} \text{Var}(w_{kj}^{[l]} a_j^{[l-1]}) \\ &= \sum_{j=1}^{n^{[l-1]}} \text{Var}(W^{[l]} a^{[l-1]}) \\ &= n^{[l-1]}\left[E[W^{[l]}]^2\,\text{Var}(a^{[l-1]}) + \text{Var}(W^{[l]})\,E[a^{[l-1]}]^2 + \text{Var}(W^{[l]})\,\text{Var}(a^{[l-1]})\right] \\ &= n^{[l-1]}\left[\text{Var}(W^{[l]})\,E[a^{[l-1]}]^2 + \text{Var}(W^{[l]})\,\text{Var}(a^{[l-1]})\right] \\ &= n^{[l-1]}\,\text{Var}(W^{[l]})\left[E[a^{[l-1]}]^2 + \text{Var}(a^{[l-1]})\right] \end{aligned} \tag{26}$$

Using the variance identity:

$$\text{Var}(X) = E[X^2] - E[X]^2 \quad\Rightarrow\quad E[X^2] = \text{Var}(X) + E[X]^2 \tag{27}$$

equation $(26)$ becomes:

$$\text{Var}(z^{[l]}) = n^{[l-1]}\,\text{Var}(W^{[l]})\,E[(a^{[l-1]})^2] \tag{28}$$

Here $E[(a^{[l-1]})^2] \neq \text{Var}(a^{[l-1]})$, because the output of ReLU does not have zero mean.

Now bring in the activation function from $(25)$: $a^{[l-1]} = \text{ReLU}(z^{[l-1]}) = \max(0, z^{[l-1]})$.

To reduce clutter, write $x$ for $a^{[l-1]}$ and $y$ for $z^{[l-1]}$:

$$\begin{aligned} E[(a^{[l-1]})^2] = E[x^2] &= \int_{-\infty}^{+\infty} x^2 P(x)\,\mathrm{d}x \\ &= \int_{-\infty}^{+\infty} \max(0,y)^2 P(y)\,\mathrm{d}y \\ &= \int_{0}^{+\infty} y^2 P(y)\,\mathrm{d}y \\ &= 0.5 \times \int_{-\infty}^{+\infty} y^2 P(y)\,\mathrm{d}y \\ &= 0.5 \times E[y^2] \\ &= 0.5 \times \text{Var}(y) \end{aligned} \tag{29}$$
The last few steps rely on $W$ having zero mean, so $E(z) = E(Wa) = E(W)E(a) = 0 = E(y)$, and on the distribution of $y$ being symmetric about zero, which is what lets us halve the full integral.

Then by equation $(27)$, $E(y^2) = \text{Var}(y)$.
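The key fact $E[\max(0,y)^2] = 0.5\,\text{Var}(y)$ for zero-mean Gaussian $y$ can be checked by Monte Carlo (the sample count and standard deviation are arbitrary):

```python
import numpy as np

np.random.seed(0)
y = np.random.randn(1_000_000) * 1.7   # zero-mean Gaussian, arbitrary std
lhs = np.mean(np.maximum(0, y) ** 2)   # E[ReLU(y)^2]
rhs = 0.5 * y.var()                    # 0.5 * Var(y)
print(lhs, rhs)  # nearly equal
```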

x , y x,y x,y用原来的式子表示,并将 ( 29 ) (29) (29)代入式 ( 28 ) (28) (28)​得:
Var ( z [ l ] ) = n [ l − 1 ] Var ( W [ l ] ) E [ ( a [ l − 1 ] ) 2 ] = 1 2 n [ l − 1 ] Var ( W [ l ] ) Var ( z [ l − 1 ] ) (30) \begin{aligned} \text{Var}(z^{[l]}) &= n^{[l-1]} \text{Var}(W^{[l]}) E[(a^{[l-1]})^2] \ &= \frac{1}{2}n^{[l-1]} \text{Var}(W^{[l]}) \text{Var}(z^{[l-1]}) \end{aligned} \tag{30} Var(z[l])=n[l1]Var(W[l])E[(a[l1])2]=21n[l1]Var(W[l])Var(z[l1])(30)
为了让 z [ l ] z^{[l]} z[l] z [ l − 1 ] z^{[l-1]} z[l1]的方差一致,需要有:
1 2 n [ l − 1 ] Var ( W [ l ] ) = 1 (31) \frac{1}{2}n^{[l-1]} \text{Var}(W^{[l]}) = 1 \tag{31} 21n[l1]Var(W[l])=1(31)

Var ( W [ l ] ) = 2 n [ l − 1 ] (32) \text{Var}(W^{[l]}) = \frac{2}{n^{[l-1]}} \tag{32} Var(W[l])=n[l1]2(32)
类似的,计算反向传播(注意要考虑ReLU的导数)可以得到
Var ( W [ l ] ) = 2 n [ l ] (33) \text{Var}(W^{[l]}) = \frac{2}{n^{[l]}} \tag{33} Var(W[l])=n[l]2(33)
Unlike Xavier initialization, however, Kaiming initialization does not take the harmonic mean of the two; either one can be used as needed, just as PyTorch's implementation lets you choose the input or output size.

Likewise, for a uniform distribution $U[-a, a]$, $a = \sqrt{\frac{6}{n}}$, where $n$ is either the input size or the output size.

Continuing the experiment with He initialization:

w = np.random.randn(node_num, node_num) * np.sqrt(2.0 / node_num)

With the He initial values, the spread of the distribution is similar across all layers. Because the spread of the data stays roughly constant even as layers are added, suitable values also flow backward during backpropagation.
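This stability claim can be checked with a rough sketch (the width, depth, and sample count below are arbitrary choices): pushing data through a stack of ReLU layers with He-scaled Gaussian weights, the activation variance neither collapses toward zero nor blows up.

```python
import numpy as np

np.random.seed(0)
n = 200
x = np.random.randn(1000, n)
first_var = None
for _ in range(10):  # 10 ReLU layers with He-scaled weights
    W = np.random.randn(n, n) * np.sqrt(2.0 / n)
    x = np.maximum(0, x @ W)
    if first_var is None:
        first_var = x.var()  # variance after the first layer

print(first_var, x.var())  # same order of magnitude after 10 layers
```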

Implementation

The implementation itself is straightforward; the code lives at: 👉 https://github.com/nlp-greyfoss/metagrad

class Linear(Module):
    r"""
    Applies a linear transformation to the input: :math:`y = xA^T + b`

    Args:
        in_features: size of each input sample
        out_features: size of each output sample
        bias: whether to include a bias, default ``True``
    Shape:
        - Input: :math:`(*, H_{in})` where :math:`*` means any number of dimensions, including none, and :math:`H_{in} = \text{in\_features}`
        - Output: :math:`(*, H_{out})` where all but the last dimension have the same shape as the input and :math:`H_{out} = \text{out\_features}`
    Attributes:
        weight: the learnable weights, of shape :math:`(\text{out\_features}, \text{in\_features})`
        bias:   the learnable bias, of shape :math:`(\text{out\_features})`
    """

    def __init__(self, in_features: int, out_features: int, bias: bool = True) -> None:
        super(Linear, self).__init__()
        self.in_features = in_features
        self.out_features = out_features

        self.weight = Parameter(Tensor.empty((out_features, in_features)))
        if bias:
            self.bias = Parameter(Tensor.zeros(out_features))
        else:
            self.bias = None
        self.reset_parameters()

    def reset_parameters(self) -> None:
        init.kaiming_normal_(self.weight)  # Kaiming initialization by default

    def forward(self, input: Tensor) -> Tensor:
        x = input @ self.weight.T
        if self.bias is not None:
            x = x + self.bias

        return x

Calling our `kaiming_normal_` implementation is all it takes to use Kaiming initialization.

