The autonomous, reliable flight of an Unmanned Aerial Vehicle (UAV) depends closely on its motion situations and on the accuracy of continuous positioning and navigation. This project aims to solve the key problems of continuous, reliable positioning and navigation for a UAV during autonomous flight. It treats the UAV and its spatial environment as a whole, mines the consistent information contained in multi-sensor data, and builds adaptive fusion models for different motion situations. First, the internal relationship between the complementarity and diversity of multi-sensor data and the UAV motion situations is analyzed; motion models of the UAV in different environments and different motion states are built, their applicability laws are revealed, and the common information in the multi-source data is mined to realize multi-level modeling. Second, the constraint relations between environmental visual features and the motion states in different environments are explored, the common laws between multi-sensor data and environmental visual features are mined, and a multi-level perception of visual features together with an error-optimization algorithm for visual positioning aided by multi-source information is proposed to realize continuous visual positioning and navigation. Finally, a distributed fusion model of multi-source dynamic information is constructed according to the applicability laws of the multi-source information and the continuity of the navigation states, a robust adaptive filtering algorithm is studied, and an adaptive dynamic positioning and navigation algorithm that does not fully rely on GNSS information is obtained. The research results will provide theoretical guidance and practical methods for the application of UAVs in complex environments.
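The robust adaptive filtering named above is described only at the level of a research objective; as a minimal sketch (not the project's exact algorithm), a Kalman measurement update with an IGG-III-style innovation-based robust weight, using illustrative variable names, could look like the following Python fragment:

    import numpy as np

    def robust_adaptive_update(x, P, z, H, R, k0=1.345, k1=4.0):
        """One robust Kalman measurement update (illustrative sketch only).

        x, P   : prior state estimate and covariance
        z, H, R: measurement, measurement matrix, nominal noise covariance
        k0, k1 : IGG-III-style thresholds for the robust weight (assumed values)
        """
        v = z - H @ x                       # innovation
        S = H @ P @ H.T + R                 # innovation covariance
        # normalized innovation magnitude used as the test statistic
        t = float(np.sqrt((v @ np.linalg.solve(S, v)) / len(v)))
        if t <= k0:                         # consistent measurement
            w = 1.0
        elif t <= k1:                       # suspect measurement: downweight
            w = (k0 / t) * ((k1 - t) / (k1 - k0)) ** 2
        else:                               # outlier: effectively reject
            w = 1e-8
        R_eff = R / w                       # inflate noise of downweighted measurements
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R_eff)
        x_new = x + K @ v
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new

An adaptive factor can be applied to the predicted covariance P in the same spirit; in a multi-source configuration each sensor's measurement block receives its own weight, which is how GNSS, visual, and other observations can be de-weighted individually when they degrade.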
This project treats the UAV's motion and its spatial environment as a whole, mines the consistent information contained in multi-sensor data, and builds adaptive fusion models of multi-source information under different motion situations to solve the key problems of continuous, reliable UAV positioning and navigation. The research results have been applied in autonomous UAV navigation and control systems for agriculture, surveying and mapping, public security, and military use. The main work completed and results obtained are:
(1) Calibration techniques for inertial sensors, cameras, and LiDAR were studied. A vanishing-point-based, target-free camera calibration method and an AR-tag-based camera-LiDAR calibration method were proposed, and fast multi-sensor calibration software was developed, enabling rapid camera calibration (a sketch of the extrinsic-estimation step appears after this list).
(2) An elastic optimization model for visual positioning and navigation aided by multi-source information was constructed. A visual-inertial odometry method combining point and line features, a LiDAR-inertial-visual simultaneous localization and mapping method, and an inertial/visual/LiDAR fusion method for visual positioning and navigation were developed. Cart, pedestrian, and vehicle tests were completed indoors and outdoors, above and below ground, and under different lighting conditions, with absolute positioning error better than 1 m and relative positioning error better than 5‰. An engineering prototype for elastic inertial/GNSS/barometer/magnetometer/visual fusion positioning and navigation was developed and applied to autonomous UAV positioning, navigation, and control, achieving continuous, safe, and reliable UAV flight in GNSS-denied conditions.
(3) By mining the regularity, complementarity, and consistency in multi-sensor data, dynamic perception models of vehicle and UAV motion-situation information, function models of typical vehicle and UAV motion situations with multiple motion constraints, and dynamic sensor-error estimation and compensation models were constructed. These form a motion-situation-aware multi-source adaptive fusion positioning and navigation method for unmanned ground vehicles and UAVs, solving the problem of continuous, resilient navigation of carriers in complex environments.
(4) Common fundamental problems of wireless-signal fingerprint positioning were tackled: the technical limitations of base-station heterogeneity and of straight-line positioning boundaries were overcome, a theoretical framework for wireless-signal positioning built around circular boundaries was established for the first time, and new methods and systems for optimal base-station selection and fingerprint positioning that account for the non-Gaussian signal characteristics of real application environments were formed. The results were tested and applied in venues of the Beijing 2022 Winter Olympics (a generic fingerprint-matching sketch is given after this list).
(5) An integrity and flight-safety monitoring framework for multi-source fusion navigation of aerial vehicles was constructed, integrity methods for multi-source information fusion were studied, and flight-safety monitoring based on a digital elevation model was realized (a terrain-clearance sketch follows the list).
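Item (1) names an AR-tag-based camera-LiDAR calibration method but does not spell out the estimation step. Assuming the tag corners have already been located in both the camera frame and the LiDAR frame (hypothetical arrays pts_cam and pts_lidar), one standard way to recover the extrinsic rotation and translation is a least-squares 3D-3D alignment (Kabsch/SVD), sketched below; this is a generic formulation, not necessarily the project's exact procedure:

    import numpy as np

    def extrinsic_from_correspondences(pts_cam, pts_lidar):
        """Estimate R, t such that pts_cam ~ R @ pts_lidar + t (Kabsch/SVD).

        pts_cam, pts_lidar : (N, 3) arrays of the same AR-tag corners
                             expressed in the camera and LiDAR frames.
        """
        mu_c = pts_cam.mean(axis=0)
        mu_l = pts_lidar.mean(axis=0)
        # cross-covariance of the centered point sets
        H = (pts_lidar - mu_l).T @ (pts_cam - mu_c)
        U, _, Vt = np.linalg.svd(H)
        # enforce a proper rotation (det(R) = +1)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_c - R @ mu_l
        return R, t

With the extrinsics known, LiDAR points can be projected into the camera image, which is what the fusion methods of item (2) rely on.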
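Item (4) develops a circular-boundary fingerprint positioning framework whose formulation is not reproduced here. For orientation only, a minimal generic weighted k-nearest-neighbour fingerprint matcher over RSSI vectors (a common baseline, not the project's method; the array shapes are assumptions) looks like:

    import numpy as np

    def wknn_locate(rssi, fp_rssi, fp_xy, k=3, eps=1e-6):
        """Generic weighted kNN fingerprint positioning (illustrative only).

        rssi    : (M,) measured RSSI vector from M base stations
        fp_rssi : (N, M) fingerprint database of RSSI vectors
        fp_xy   : (N, 2) reference-point coordinates
        """
        d = np.linalg.norm(fp_rssi - rssi, axis=1)   # signal-space distances
        idx = np.argsort(d)[:k]                      # k closest fingerprints
        w = 1.0 / (d[idx] + eps)                     # inverse-distance weights
        return (w[:, None] * fp_xy[idx]).sum(axis=0) / w.sum()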
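Item (5) mentions flight-safety monitoring based on a digital elevation model. A minimal sketch of one such terrain-clearance check, assuming a regular-grid DEM in the same map frame as the UAV position and a hypothetical clearance margin, is:

    import numpy as np

    def terrain_clearance_ok(east, north, alt, dem, origin, cell, margin=50.0):
        """Check that a UAV position keeps a safety margin above a grid DEM.

        east, north, alt : UAV position (same map frame and units as the DEM)
        dem              : 2-D array of terrain elevations on a regular grid
        origin           : (east0, north0) of dem[0, 0]
        cell             : grid spacing in metres
        margin           : required clearance above terrain (assumed value)
        """
        col = int(round((east - origin[0]) / cell))
        row = int(round((north - origin[1]) / cell))
        if not (0 <= row < dem.shape[0] and 0 <= col < dem.shape[1]):
            return False                  # outside DEM coverage: treat as unsafe
        return alt - dem[row, col] >= margin

In practice such a check would also account for positioning uncertainty and look ahead along the predicted trajectory rather than testing only the current position.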
Data last updated: 2023-05-31
Similar projects:
On the impact of the big-data environment on the development of information science
Research on SSVEP-based direct brain control of robot direction and speed
Channel allocation strategies for low-Earth-orbit satellite communications
Research on crime prediction algorithms based on multimodal information feature fusion
Efficient parallel radiation-diffusion algorithms on multi-block structured grids for inertial confinement fusion implosions
Research on multi-source navigation information fusion methods based on observation-sequence feedback
Research on cooperative positioning mechanisms based on distributed, dynamic fusion of multi-source navigation information
Research on pedestrian dynamic fusion navigation and positioning algorithms aided by multi-level environment-perception information
Research on rice blast situation awareness methods based on multi-source information fusion