The main purpose of this project is to design feedforward neural network learning algorithms with high fault tolerance and to analyze their deterministic convergence. To improve generalization, researchers often inject noise, faults, and regularization terms into feedforward neural network training, and the resulting fault-tolerant learning methods have become a popular topic in artificial neural network research. On the one hand, fault-tolerant learning is essentially a stochastic algorithm, and existing research is largely limited to asymptotic convergence analysis based on the L2 regularizer; it is therefore valuable to study the deterministic convergence of fault-tolerant neural networks under certain special conditions. On the other hand, L1/2 regularization is one of the hot topics in the international regularization field; compared with the L2 regularizer, the L1/2 regularizer more readily yields sparse solutions. It is thus of interest to design more efficient fault-tolerant algorithms for feedforward neural networks based on the L1/2 regularizer and to study their deterministic convergence. This project will analyze the impact of noise and faults on neural networks and establish the deterministic convergence of fault-tolerant learning for feedforward neural networks based on the L2 regularizer. To handle the singularity of the L1/2 regularizer during training, two numerical schemes are considered: smoothing-function approximation and the subgradient method. The monotonicity of the error function and the boundedness of the weights in fault-tolerant learning are studied, and deterministic convergence is then proved for different weight-updating modes.
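To make the smoothing-function idea concrete, the sketch below is a minimal hypothetical example, not the project's actual algorithm: a one-hidden-layer network is trained with an L1/2-type penalty in which the singular term |w|^(1/2) is replaced by the smooth surrogate (w^2 + eps^2)^(1/4), whose gradient remains bounded at w = 0. All parameter values (eps, lam, lr, network size) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(64, 1))
Y = np.sin(X)

# One-hidden-layer feedforward network with tanh activation.
n_hidden = 10
W1 = rng.normal(scale=0.5, size=(1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

lam, eps, lr = 1e-3, 1e-2, 0.05

def smoothed_l_half(w):
    """Smoothed L1/2 penalty: (w^2 + eps^2)^(1/4) approximates |w|^(1/2)."""
    return np.sum((w ** 2 + eps ** 2) ** 0.25)

def smoothed_l_half_grad(w):
    """Gradient of the smoothed penalty; finite (in fact zero) at w = 0."""
    return 0.5 * w * (w ** 2 + eps ** 2) ** (-0.75)

for step in range(2000):
    # Forward pass.
    H = np.tanh(X @ W1 + b1)
    out = H @ W2 + b2
    err = out - Y
    # Backward pass for the mean-squared error.
    g_out = 2 * err / len(X)
    gW2 = H.T @ g_out
    gb2 = g_out.sum(axis=0)
    gH = (g_out @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ gH
    gb1 = gH.sum(axis=0)
    # Batch gradient descent; the smoothed L1/2 term is applied to weights only.
    W1 -= lr * (gW1 + lam * smoothed_l_half_grad(W1))
    W2 -= lr * (gW2 + lam * smoothed_l_half_grad(W2))
    b1 -= lr * gb1
    b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
print(mse)
```

Because the surrogate's gradient is bounded, the update rule is well defined even when a weight passes through zero, which is exactly the difficulty the smoothing (or, alternatively, the subgradient method) is meant to resolve.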
Fault-tolerant neural networks are finding ever wider practical application. This project designed feedforward neural network learning algorithms with stronger fault tolerance and studied their convergence. To improve generalization, noise, faults, and regularization terms are often added to conventional neural network training, and the resulting fault-tolerant learning algorithms are a research hotspot in the neural network field. The main content includes the following. On the one hand, fault-tolerant learning is essentially a stochastic algorithm, and existing research is largely limited to asymptotic convergence analysis based on the L2 regularizer, so obtaining deterministic convergence under certain special conditions is worth studying. On the other hand, L1/2 regularization theory is a focus of the international regularization field; since the L1/2 regularizer more readily yields sparse solutions than the L2 regularizer, we designed efficient fault-tolerant learning algorithms based on the L1/2 regularizer and studied their deterministic convergence. The project investigated the impact of different kinds of noise and faults on neural network learning algorithms and established the convergence of L2-regularized fault-tolerant algorithms; it used smoothing-function approximation to resolve the singularities introduced by the Group Lasso and L1/2 regularizers; and it studied the monotonicity of the error function and the boundedness of the weights for L1/2-regularized fault-tolerant algorithms, proving deterministic convergence under different weight-updating modes.
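The fault-injection idea behind these algorithms — perturbing the weights during training so that the learned network tolerates similar perturbations at deployment — can be sketched as follows for a simple linear model under multiplicative weight noise. This is a hedged illustration of the general technique, not the project's specific method, and every parameter value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear regression data with a known sparse-ish weight vector.
X = rng.normal(size=(128, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
Y = X @ true_w + 0.01 * rng.normal(size=128)

w = np.zeros(5)
lr, noise_sigma = 0.05, 0.1

for step in range(500):
    # Simulate a multiplicative weight fault: w_faulty = w * (1 + delta),
    # a fresh random perturbation at every training step.
    delta = noise_sigma * rng.normal(size=w.shape)
    w_faulty = w * (1 + delta)
    err = X @ w_faulty - Y
    # MSE gradient with respect to the clean weights, through the fault model.
    grad = (2 / len(X)) * (X.T @ err) * (1 + delta)
    w -= lr * grad

print(np.round(w, 2))
```

Training under injected faults makes the expected loss a stochastic objective — the same property that makes fault-tolerant learning "essentially a stochastic algorithm" in the abstract above — yet the learned weights still land close to the true ones, with a mild shrinkage induced by the noise.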
Data last updated: 2023-05-31
Design and analysis of learning algorithms for feedforward neural networks
Deterministic convergence of online learning algorithms for BP neural networks
Singular learning and algorithm research for feedforward deep neural networks
Research on the convergence of neural network learning algorithms