Activation function


In computational networks, the activation function of a node defines the output of that node given an input or a set of inputs. A standard computer-chip circuit can be seen as a digital network of activation functions that output either "ON" (1) or "OFF" (0) depending on the input. This is similar to the behavior of the linear perceptron in neural networks. However, only nonlinear activation functions allow such networks to compute nontrivial problems using only a small number of nodes. In artificial neural networks, this function is also called the transfer function.
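
As a minimal illustrative sketch (the helper names, weights, and inputs below are arbitrary examples, not from the original article), a node's output is its activation function applied to the weighted sum of its inputs:

```python
import math

def node_output(weights, bias, inputs, activation):
    # A node applies its activation function to the
    # weighted sum of its inputs plus a bias.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

def binary_step(z):
    # Digital-circuit style output: "ON" (1) or "OFF" (0).
    return 1.0 if z >= 0 else 0.0

def logistic(z):
    # A smooth, nonlinear alternative.
    return 1.0 / (1.0 + math.exp(-z))

print(node_output([0.5, -0.2], 0.1, [1.0, 2.0], binary_step))  # 1.0
print(node_output([0.5, -0.2], 0.1, [1.0, 2.0], logistic))     # ~0.55
```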

Functions

The following table lists several activation functions that take a single variable as input:

| Name | Equation | Derivative | Range | Order of continuity | Monotonic | Derivative monotonic | Approximates identity near the origin |
|---|---|---|---|---|---|---|---|
| Identity | f(x) = x | f'(x) = 1 | (−∞, ∞) | C^∞ | Yes | Yes | Yes |
| Binary step | f(x) = 0 for x < 0; 1 for x ≥ 0 | f'(x) = 0 for x ≠ 0 (undefined at x = 0) | {0, 1} | C^−1 | Yes | No | No |
| Logistic (also known as the sigmoid or S-shaped function) | f(x) = σ(x) = 1 / (1 + e^−x) | f'(x) = f(x)(1 − f(x)) | (0, 1) | C^∞ | Yes | No | No |
| Hyperbolic tangent (tanh) | f(x) = tanh(x) = (e^x − e^−x) / (e^x + e^−x) | f'(x) = 1 − f(x)² | (−1, 1) | C^∞ | Yes | No | Yes |
| Arctangent | f(x) = tan⁻¹(x) | f'(x) = 1 / (x² + 1) | (−π/2, π/2) | C^∞ | Yes | No | Yes |
| Softsign[1][2] | f(x) = x / (1 + abs(x)) | f'(x) = 1 / (1 + abs(x))² | (−1, 1) | C^1 | Yes | No | Yes |
| Inverse square root unit (ISRU)[3] | f(x) = x / √(1 + αx²) | f'(x) = (1 / √(1 + αx²))³ | (−1/√α, 1/√α) | C^∞ | Yes | No | Yes |
| Rectified linear unit (ReLU) | f(x) = 0 for x < 0; x for x ≥ 0 | f'(x) = 0 for x < 0; 1 for x ≥ 0 | [0, ∞) | C^0 | Yes | Yes | No |
| Leaky ReLU | f(x) = 0.01x for x < 0; x for x ≥ 0 | f'(x) = 0.01 for x < 0; 1 for x ≥ 0 | (−∞, ∞) | C^0 | Yes | Yes | No |
| Parametric ReLU (PReLU)[4] | f(α, x) = αx for x < 0; x for x ≥ 0 | f'(α, x) = α for x < 0; 1 for x ≥ 0 | (−∞, ∞) | C^0 | Yes iff α ≥ 0 | Yes | Yes iff α = 1 |
| Randomized leaky ReLU (RReLU)[5] | f(α, x) = αx for x < 0; x for x ≥ 0 [b] | f'(α, x) = α for x < 0; 1 for x ≥ 0 | (−∞, ∞) | C^0 | Yes | Yes | No |
| Exponential linear unit (ELU)[6] | f(α, x) = α(e^x − 1) for x ≤ 0; x for x > 0 | f'(α, x) = f(α, x) + α for x ≤ 0; 1 for x > 0 | (−α, ∞) | C^1 iff α = 1, otherwise C^0 | Yes iff α ≥ 0 | Yes iff 0 ≤ α ≤ 1 | Yes iff α = 1 |
| Scaled ELU (SELU)[7] | f(α, x) = λα(e^x − 1) for x < 0; λx for x ≥ 0, with λ = 1.0507 and α = 1.67326 | f'(α, x) = λα e^x for x < 0; λ for x ≥ 0 | (−λα, ∞) | C^0 | Yes | No | No |
| S-shaped ReLU (SReLU)[8] | f(x) = t_l + a_l(x − t_l) for x ≤ t_l; x for t_l < x < t_r; t_r + a_r(x − t_r) for x ≥ t_r, where t_l, a_l, t_r, a_r are parameters | f'(x) = a_l for x ≤ t_l; 1 for t_l < x < t_r; a_r for x ≥ t_r | (−∞, ∞) | C^0 | No | No | No |
| Inverse square root linear unit (ISRLU)[3] | f(x) = x / √(1 + αx²) for x < 0; x for x ≥ 0 | f'(x) = (1 / √(1 + αx²))³ for x < 0; 1 for x ≥ 0 | (−1/√α, ∞) | C^2 | Yes | Yes | Yes |
| Adaptive piecewise linear (APL)[9] | f(x) = max(0, x) + Σ_{s=1}^{S} a_i^s max(0, −x + b_i^s) | f'(x) = H(x) − Σ_{s=1}^{S} a_i^s H(−x + b_i^s) [a] | (−∞, ∞) | C^0 | No | No | No |
| SoftPlus[10] | f(x) = ln(1 + e^x) | f'(x) = 1 / (1 + e^−x) | (0, ∞) | C^∞ | Yes | Yes | No |
| Bent identity | f(x) = (√(x² + 1) − 1)/2 + x | f'(x) = x / (2√(x² + 1)) + 1 | (−∞, ∞) | C^∞ | Yes | Yes | Yes |
| Sigmoid-weighted linear unit (SiLU)[11] (also known as Swish[12]) | f(x) = x·σ(x) [c] | f'(x) = f(x) + σ(x)(1 − f(x)) [c] | [≈ −0.28, ∞) | C^∞ | No | No | No |
| SoftExponential[13] | f(α, x) = −ln(1 − α(x + α))/α for α < 0; x for α = 0; (e^{αx} − 1)/α + α for α > 0 | f'(α, x) = 1 / (1 − α(α + x)) for α < 0; e^{αx} for α ≥ 0 | (−∞, ∞) | C^∞ | Yes | Yes | Yes iff α = 0 |
| Sinusoid | f(x) = sin(x) | f'(x) = cos(x) | [−1, 1] | C^∞ | No | No | Yes |
| Sinc | f(x) = 1 for x = 0; sin(x)/x for x ≠ 0 | f'(x) = 0 for x = 0; cos(x)/x − sin(x)/x² for x ≠ 0 | [≈ −0.217, 1] | C^∞ | No | No | No |
| Gaussian | f(x) = e^{−x²} | f'(x) = −2x e^{−x²} | (0, 1] | C^∞ | No | No | No |

Notes:
[a] Here H is the Heaviside step function.
[b] α is a stochastic variable sampled from a uniform distribution at training time and fixed to the expectation value of that distribution at test time.
[c] Here σ is the logistic function.
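
As a concrete companion to the table, here is a short Python sketch (an illustration, not from the original article) of a few of the single-variable functions above, with a central finite difference used to check each analytic derivative against its equation:

```python
import math

# A few single-variable activations from the table, with their derivatives.
def relu(x):       return max(0.0, x)
def relu_d(x):     return 0.0 if x < 0 else 1.0

def logistic(x):   return 1.0 / (1.0 + math.exp(-x))
def logistic_d(x):
    fx = logistic(x)
    return fx * (1.0 - fx)

def elu(x, a=1.0):   return a * (math.exp(x) - 1.0) if x < 0 else x
def elu_d(x, a=1.0): return a * math.exp(x) if x < 0 else 1.0

def softplus(x):   return math.log1p(math.exp(x))
def softplus_d(x): return logistic(x)  # d/dx ln(1 + e^x) = 1/(1 + e^-x)

def numeric_d(f, x, h=1e-6):
    # Central finite difference, used to sanity-check the derivatives.
    return (f(x + h) - f(x - h)) / (2.0 * h)

for f, f_d in [(relu, relu_d), (logistic, logistic_d),
               (elu, elu_d), (softplus, softplus_d)]:
    for x in (-1.5, 0.5, 2.0):
        assert abs(f_d(x) - numeric_d(f, x)) < 1e-5
```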

The following table lists several activation functions that take multiple variables as input:

| Name | Equation | Derivative | Range | Order of continuity |
|---|---|---|---|---|
| Softmax | f_i(x) = e^{x_i} / Σ_{j=1}^{J} e^{x_j}, for i = 1, …, J | ∂f_i(x)/∂x_j = f_i(x)(δ_{ij} − f_j(x)) [d] | (0, 1) | C^∞ |
| Maxout[14] | f(x) = max_i x_i | ∂f/∂x_j = 1 for j = argmax_i x_i; 0 otherwise | (−∞, ∞) | C^0 |

[d] Here δ is the Kronecker delta.
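
A matching sketch for the multi-variable entries (again illustrative, not from the original article): a numerically stable softmax, its Jacobian built from the Kronecker-delta formula in the table, and maxout:

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_jacobian(xs):
    # J[i][j] = f_i(x) * (delta_ij - f_j(x)), delta being the Kronecker delta.
    f = softmax(xs)
    return [[f[i] * ((1.0 if i == j else 0.0) - f[j])
             for j in range(len(f))] for i in range(len(f))]

def maxout(xs):
    # Maxout returns the largest of its inputs.
    return max(xs)

probs = softmax([1.0, 2.0, 3.0])
print(probs, sum(probs))        # components in (0, 1), summing to 1.0
print(maxout([1.0, 2.0, 3.0]))  # 3.0
```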


References

  1. Bergstra, James; Desjardins, Guillaume; Lamblin, Pascal; Bengio, Yoshua. "Quadratic polynomials learn better image features". Technical Report 1337, Département d'Informatique et de Recherche Opérationnelle, Université de Montréal, 2009. Archived from the original on 2018-09-25.
  2. Glorot, Xavier; Bengio, Yoshua. "Understanding the difficulty of training deep feedforward neural networks" (PDF). International Conference on Artificial Intelligence and Statistics (AISTATS'10), Society for Artificial Intelligence and Statistics, 2010. Archived (PDF) from the original on 2017-04-01.
  3. Carlile, Brad; Delamarter, Guy; Kinney, Paul; Marti, Akiko; Whitney, Brian. "Improving Deep Learning by Inverse Square Root Linear Units (ISRLUs)". 2017-11-09. arXiv:1710.09967 [cs.LG].
  4. He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian. "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification". 2015-02-06. arXiv:1502.01852 [cs.CV].
  5. Xu, Bing; Wang, Naiyan; Chen, Tianqi; Li, Mu. "Empirical Evaluation of Rectified Activations in Convolutional Network". 2015-05-04. arXiv:1505.00853 [cs.LG].
  6. Clevert, Djork-Arné; Unterthiner, Thomas; Hochreiter, Sepp. "Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)". 2015-11-23. arXiv:1511.07289 [cs.LG].
  7. Klambauer, Günter; Unterthiner, Thomas; Mayr, Andreas; Hochreiter, Sepp. "Self-Normalizing Neural Networks". 2017-06-08. arXiv:1706.02515 [cs.LG].
  8. Jin, Xiaojie; Xu, Chunyan; Feng, Jiashi; Wei, Yunchao; Xiong, Junjun; Yan, Shuicheng. "Deep Learning with S-shaped Rectified Linear Activation Units". 2015-12-22. arXiv:1512.07030 [cs.CV].
  9. Agostinelli, Forest; Hoffman, Matthew; Sadowski, Peter; Baldi, Pierre. "Learning Activation Functions to Improve Deep Neural Networks". 2014-12-21. arXiv:1412.6830 [cs.NE].
  10. Glorot, Xavier; Bordes, Antoine; Bengio, Yoshua. "Deep sparse rectifier neural networks" (PDF). International Conference on Artificial Intelligence and Statistics, 2011. Archived (PDF) from the original on 2018-06-19.
  11. "Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning". Retrieved 2018-06-13. Archived from the original on 2018-06-13.
  12. "Searching for Activation Functions". Retrieved 2018-06-13. Archived from the original on 2018-06-13.
  13. Godfrey, Luke B.; Gashler, Michael S. "A continuum among logarithmic, linear, and exponential functions, and its potential to improve generalization in neural networks". 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (KDIR), 2016-02-03, 1602: 481–486. Bibcode:2016arXiv160201321G. arXiv:1602.01321.
  14. Goodfellow, Ian J.; Warde-Farley, David; Mirza, Mehdi; Courville, Aaron; Bengio, Yoshua. "Maxout Networks". JMLR WCP, 2013-02-18, 28 (3): 1319–1327. Bibcode:2013arXiv1302.4389G. arXiv:1302.4389.