
ReLU Activation Functions

Overview

ReLU is applied after the convolution, so, like tanh and sigmoid, it is a non-linear activation function. The derivative of ReLU is identically 1 on the positive part, so using ReLU in a deep network does not cause vanishing or exploding gradients. ReLU is also cheap to compute, which speeds up training. However, if the gradients are too large, many pre-activations can be pushed into the negative range; since the negative part outputs 0, those neurons can no longer be activated (this can be mitigated by using a smaller learning rate).

ReLU and Its Variants

1. ReLU

\operatorname{ReLU}(x)=\max(x, 0)=\begin{cases}0, & x\le 0\\ x, & x>0\end{cases}

Backward derivation: let the output of layer l be z^l, and the output after the activation be z^{l+1}. Write the partial derivative of the loss L with respect to the output of layer l as δ^l = ∂L/∂z^l. The partial derivative of L with respect to layer l is then:

\delta^{l}=\frac{\partial L}{\partial z^{l+1}}\frac{\partial z^{l+1}}{\partial z^{l}}=\delta^{l+1}\frac{\partial \operatorname{ReLU}(z^{l})}{\partial z^{l}}=\delta^{l+1}\begin{cases}1 & z^{l}>0\\ 0 & z^{l}\le 0\end{cases}
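
To make this concrete, here is a minimal NumPy sketch (not from the original post) of the ReLU forward pass and of pushing δ^{l+1} back through it; the names relu_forward, relu_backward, x, and delta_next are made up for this illustration:

import numpy as np

def relu_forward(x):
    # ReLU(x) = max(x, 0), applied elementwise
    return np.maximum(x, 0.0)

def relu_backward(delta_next, x):
    # delta^l = delta^{l+1} where z^l > 0, and 0 where z^l <= 0;
    # the zero branch is the "dying ReLU" region mentioned in the overview
    return delta_next * (x > 0).astype(x.dtype)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
delta_next = np.ones_like(x)
print(relu_forward(x))               # [0.  0.  0.  1.5 3. ]
print(relu_backward(delta_next, x))  # [0. 0. 0. 1. 1.]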


2. LeakyReLU

\operatorname{LeakyReLU}(z)=\begin{cases}z & z>0\\ \alpha z & z\le 0\end{cases},\quad \alpha=0.1

A small slope is kept on the negative part. From the ReLU case, the partial derivative of the loss L with respect to layer l is:

\delta^{l}=\begin{cases}\delta^{l+1} & z^{l}>0\\ \alpha\delta^{l+1} & z^{l}\le 0\end{cases},\quad \alpha=0.1
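
A matching sketch for LeakyReLU, again only an assumed illustration with α = 0.1 rather than code from the post:

import numpy as np

alpha = 0.1

def leaky_relu_forward(x):
    # z where z > 0, alpha * z otherwise
    return np.where(x > 0, x, alpha * x)

def leaky_relu_backward(delta_next, x):
    # delta^l = delta^{l+1} on the positive side, alpha * delta^{l+1} otherwise
    return delta_next * np.where(x > 0, 1.0, alpha)

x = np.array([-2.0, 0.5, 3.0])
print(leaky_relu_forward(x))                    # [-0.2  0.5  3. ]
print(leaky_relu_backward(np.ones_like(x), x))  # [0.1 1.  1. ]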


3. PReLU

The expression is the same as LeakyReLU, except that α is learnable. The partial derivative of the loss L with respect to the parameter α is:

\frac{\partial L}{\partial \alpha}=\frac{\partial L}{\partial z^{l+1}}\frac{\partial z^{l+1}}{\partial \alpha}=\delta^{l+1}\frac{\partial \operatorname{PReLU}(z^{l})}{\partial \alpha}=\delta^{l+1}\begin{cases}0 & z^{l}>0\\ z^{l} & z^{l}\le 0\end{cases}=\begin{cases}0 & z^{l}>0\\ \delta^{l+1}z^{l} & z^{l}\le 0\end{cases}
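
To make the update of α concrete, here is a small sketch (an illustrative assumption, with a single α shared across all positions) that returns both δ^l and the accumulated ∂L/∂α:

import numpy as np

def prelu_backward(delta_next, x, alpha):
    # Gradient w.r.t. the input: 1 on the positive side, alpha otherwise
    delta = delta_next * np.where(x > 0, 1.0, alpha)
    # Gradient w.r.t. alpha: 0 on the positive side, z^l otherwise,
    # summed because a single alpha is shared by all positions
    dalpha = np.sum(delta_next * np.where(x > 0, 0.0, x))
    return delta, dalpha

x = np.array([-2.0, 0.5, -1.0])
delta, dalpha = prelu_backward(np.ones_like(x), x, alpha=0.1)
print(delta)   # [0.1 1.  0.1]
print(dalpha)  # -3.0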


4. ELU

\operatorname{ELU}(z)=\begin{cases}z & z>0\\ \alpha(\exp(z)-1) & z\le 0\end{cases}

From the LeakyReLU case, the partial derivative of the loss L with respect to layer l is:

\delta^{l}=\begin{cases}\delta^{l+1} & z^{l}>0\\ \alpha\delta^{l+1}\exp(z^{l}) & z^{l}\le 0\end{cases}
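
The same pattern for ELU, as an illustrative sketch (not from the post); note that for z ≤ 0 the derivative α·exp(z) equals ELU(z) + α, so an implementation could also reuse the forward output:

import numpy as np

alpha = 0.2

def elu_forward(x):
    # z where z > 0, alpha * (exp(z) - 1) otherwise
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def elu_backward(delta_next, x):
    # delta^l = delta^{l+1} where z^l > 0, alpha * delta^{l+1} * exp(z^l) otherwise
    return delta_next * np.where(x > 0, 1.0, alpha * np.exp(x))

x = np.array([-2.0, 0.5, 3.0])
print(elu_forward(x))                    # [-0.1729...  0.5  3. ]
print(elu_backward(np.ones_like(x), x))  # [ 0.0271...  1.   1. ]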


5. SELU

\operatorname{SELU}(z)=\lambda\begin{cases}z & z>0\\ \alpha(\exp(z)-1) & z\le 0\end{cases}

From the ELU case, the partial derivative of the loss L with respect to layer l is:

\delta^{l}=\lambda\begin{cases}\delta^{l+1} & z^{l}>0\\ \alpha\delta^{l+1}\exp(z^{l}) & z^{l}\le 0\end{cases}
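
A corresponding SELU sketch; the constants below are the commonly cited self-normalizing values (α ≈ 1.6733, λ ≈ 1.0507), which differ from the smaller values used purely for display in the plotting code at the end of this post:

import numpy as np

ALPHA = 1.6732632423543772   # commonly cited SELU alpha
LAMBDA = 1.0507009873554805  # commonly cited SELU lambda (scale)

def selu_forward(x):
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

def selu_backward(delta_next, x):
    # delta^l = lambda * delta^{l+1} where z^l > 0,
    # lambda * alpha * delta^{l+1} * exp(z^l) otherwise
    return delta_next * LAMBDA * np.where(x > 0, 1.0, ALPHA * np.exp(x))

x = np.array([-2.0, 0.5, 3.0])
print(selu_forward(x))
print(selu_backward(np.ones_like(x), x))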


Summary

When the mean of the activations is non-zero, a bias is passed on to the next layer: if the activations do not cancel each other out (i.e. their mean is non-zero), the next layer's units receive a bias shift. The effect accumulates, and the more units there are, the larger the bias shift becomes. Apart from ReLU, the activation functions above push the mean output toward zero, which speeds up convergence, an effect similar to Batch Normalization but at lower computational cost. Although LeakyReLU and PReLU also produce negative values, they do not guarantee robustness to noise in the deactivated state (i.e. when the input is negative). ELU, by contrast, saturates softly for small inputs, which improves its robustness to noise.
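
As a rough numerical illustration of the bias-shift argument (a sketch with assumed inputs, not from the original post), one can compare the mean activation of ReLU and ELU on zero-mean standard-normal pre-activations:

import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)  # zero-mean pre-activations

relu_out = np.maximum(z, 0.0)
elu_out = np.where(z > 0, z, np.exp(z) - 1.0)  # ELU with alpha = 1

# ReLU discards the negative half, so its mean sits clearly above zero (about 0.40);
# ELU's soft negative saturation pulls the mean back toward zero (about 0.16)
print(relu_out.mean(), elu_out.mean())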

Code for the Header Figure

Python ReLU Code
## WeChat Official Account: 深度学习视觉
## Author: Fain
## Blog: Fainke.com
#
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
%matplotlib inline
sns.set(style="darkgrid")
#
fig = plt.figure(figsize=(12, 6))
#
plt.xlim([-10, 10])
plt.ylim([-1, 1.6])
#
# Input values
x = np.linspace(-10, 10, 1000)
#
# ReLU
relu = [max(item, 0) for item in x]
#
# LeakyReLU
alpha = 0.1
leakRelu = [item if item > 0 else item * alpha for item in x]
#
# PReLU: same form as LeakyReLU, but alpha is a learnable parameter;
# with alpha fixed at 0.1 its curve coincides with LeakyReLU, so it is not plotted separately
alpha = 0.1
prelu = [item if item > 0 else item * alpha for item in x]
#
# ELU
alpha = 0.2
elu = [item if item > 0 else (np.exp(item) - 1) * alpha for item in x]
#
# SELU (alpha and lambda chosen for display, not the standard self-normalizing constants)
alpha = 1
r = 0.5
selu = [r * item if item > 0 else r * (np.exp(item) - 1) * alpha for item in x]
#
# Plot
plt.plot(x, relu, color="#ff0000", label=r"ReLU", marker='*')
plt.plot(x, leakRelu, color="#0000ff", label=r"LeakyReLU")
plt.plot(x, elu, color="#00ff00", label=r"ELU")
plt.plot(x, selu, color="#00ffee", label=r"SELU")

plt.legend(prop={'family': 'Times New Roman', 'size': 16})
plt.show()
