The ability to learn continually is one of the core competences of the brain: it allows an intelligent agent to acquire new skills and knowledge without forgetting old ones. Continual learning has therefore been an important topic in both neuroscience and artificial intelligence. Neuroscientists have carried out detailed experimental studies of the phenomenon, but a systematic theory is still lacking; AI researchers have proposed many continual learning algorithms, but the artificial neural networks they use lack the dynamical features of real neurons. This project will combine these two lines of work, using tools from nonlinear physics and the theory of complex networks, and propose dynamically plausible mechanisms that enable continual learning in real neural systems. Two requirements must be satisfied: (1) the proposed dynamics must actually enable a dynamical neural network to learn continually; (2) all computation in the learning process must be carried out by the neurons themselves. On the one hand, we will apply algorithms from AI to reservoir neural networks, which share features of both biological and artificial networks, train them on cognitive tasks presented sequentially, and test whether these algorithms work; based on a mean-field analysis of the learning dynamics, we will then propose a new mechanism that satisfies both requirements above. On the other hand, we will analyze time-series data from neurocognitive experiments to study how synaptic weights and network topology evolve during continual learning, and use these experimental results to further refine the proposed dynamical mechanism. The project will not only deepen our understanding of how the brain achieves continual learning, but also shed light on the study of continual learning in AI.
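As a concrete illustration of the first line of work, the sketch below trains a reservoir (echo-state) network on two toy tasks presented one after the other and measures how much performance on the first task degrades. It is a minimal, hypothetical example: the reservoir size, the sine/cosine toy tasks, and the ridge-regression readout are our own illustrative choices, not the project's actual setup.

```python
# Minimal sketch (not the project's code): an echo-state reservoir trained
# sequentially on two toy tasks with a ridge-regression readout, to show how
# forgetting of task 1 can be measured after naive training on task 2.
import numpy as np

rng = np.random.default_rng(0)
N = 200                                    # reservoir size (arbitrary choice)
W_in = rng.normal(0, 1.0, (N, 1))          # input weights
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in.flatten() * u_t)
        states.append(x.copy())
    return np.array(states)

def ridge_readout(X, y, lam=1e-4):
    """Fit linear readout weights by ridge regression."""
    return np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)

# Two toy "cognitive" tasks: predict sin vs. cos of time from a common drive.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(0.3 * t)
X = run_reservoir(u)
y1, y2 = np.sin(t), np.cos(t)

W_out = ridge_readout(X, y1)               # learn task 1
err1_before = np.mean((X @ W_out - y1) ** 2)
W_out = ridge_readout(X, y2)               # naive sequential training on task 2
err1_after = np.mean((X @ W_out - y1) ** 2)
print(f"task-1 error before/after task 2: {err1_before:.4f} / {err1_after:.4f}")
```

Because the readout is simply overwritten when task 2 is trained, the task-1 error jumps after the second fit; continual learning algorithms would be evaluated by how much they reduce this gap.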
Continual learning is a core capability of the brain, allowing an individual to acquire new skills and knowledge without forgetting old ones, and it is therefore an important topic in both neuroscience and artificial intelligence. In this project we carried out robustness tests of mainstream continual learning algorithms. Our results show that current continual learning frameworks and task designs have limitations and do not fully reflect the demands that realistic environments place on continual learning; this offers a new perspective for continual learning research and also suggests that the brain may rely on different dynamical mechanisms to achieve it. We constructed an implementation of the orthogonal-projection continual learning algorithm on spiking neural networks (SNNs), and designed a biologically plausible weight-update algorithm for neural networks. In follow-up work, combining and improving these two components will allow a continual learning mechanism implemented purely through dynamics, laying the foundation for algorithms that achieve human-like continual learning under realistic conditions. We also proposed SEA-net, which endows artificial neural networks with the ability to create symbols. The SEA-net framework can be applied to many types of deep neural networks, enabling embodied symbol generation and manipulation, and can combine the strengths of connectionist and symbolic AI models to build powerful symbolic neural network systems. This work also offers a new way for artificial neural networks to generate and store symbolic memories, as the brain does.
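For the orthogonal-projection approach mentioned above, the sketch below assumes an orthogonal-weight-modification (OWM)-style projected update on a simple rate-based linear readout; it does not reproduce the project's spiking-network implementation or its biologically plausible update rule, and all names, parameters, and the toy tasks are illustrative.

```python
# Minimal sketch, assuming an OWM-style projected update as one concrete form
# of "orthogonal projection continual learning"; not the project's method.
import numpy as np

def projected_update(W, P, x, error, alpha=1e-3):
    """Update W only within the subspace tracked by P, then shrink P along x.

    W : (n_out, n_in) weights;  P : (n_in, n_in) projector, initially I.
    x : (n_in,) input;          error : (n_out,) target minus output.
    """
    Px = P @ x
    gain = Px / (alpha + x @ Px)                  # recursive-least-squares gain
    W = W + np.outer(error, gain)                 # update restricted to span(P)
    P = P - np.outer(Px, Px) / (alpha + x @ Px)   # remove direction x from P
    return W, P

# Toy check: two linear tasks whose inputs live in disjoint subspaces,
# trained strictly one after the other.
rng = np.random.default_rng(0)
n_in, n_out = 20, 3
W, P = np.zeros((n_out, n_in)), np.eye(n_in)
targets = [rng.normal(size=(n_out, n_in)) for _ in range(2)]
task_dims = [slice(0, 10), slice(10, 20)]

for task, dims in enumerate(task_dims):
    for _ in range(2000):
        x = np.zeros(n_in)
        x[dims] = rng.normal(size=10)
        W, P = projected_update(W, P, x, targets[task] @ x - W @ x)

# Task 1 should still be solved after training on task 2, because task-2
# updates are projected away from the input directions used by task 1.
x = np.zeros(n_in)
x[task_dims[0]] = rng.normal(size=10)
print("task-1 residual after task 2:", np.linalg.norm(targets[0] @ x - W @ x))
```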