In this project, we propose to address several issues in vision-based bidirectional emotive interaction between a user and a socially assistive robot that can respond in an emotive manner based on its perception of the user's emotions. This requires the robot to be sensitive to human affective states and to provide an appropriate empathic response at the right time, so as to best meet the user's emotional needs. Specifically, we first propose a speech-enhanced facial action unit recognition method that utilizes both visual and acoustic cues. Second, we propose a new dynamic model, inspired by Allen's interval algebra, for perceiving and understanding facial behavior and body gestures. Third, we propose a decision-theoretic framework for emotional assistance. Fourth, we propose a natural facial expression synthesis method for robots.

The proposed bidirectional emotive human-robot interaction is essential for a robot to coexist with its users and its workplace through multimodal perception and natural interaction. First, we enhance the generality and robustness of facial action unit recognition by exploiting domain knowledge about the spatio-temporal characteristics of facial muscles during speech and expression changes, thereby helping the robot coexist with its imaging environment. Second, we propose to employ Allen's interval algebra to capture the rich dynamics of facial behavior and body gestures, and to develop a new dynamic model for their perception and understanding; this provides a novel solution to multimodal dynamic perception. Third, we propose a decision-theoretic framework that integrates multimodal measurements of user emotion with related contextual and personal information, in order to perform individualized emotion recognition and to decide the optimal robot feedback; this greatly advances human-robot coexistence. Fourth, we propose a facial expression synthesis method for robots that leverages the probabilistic dependencies among facial feature points, action units, and expressions, representing a novel approach to natural facial expression synthesis.
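To make the interval-algebra idea concrete, below is a minimal Python sketch of Allen's thirteen temporal relations applied to activation intervals of facial action units (AUs); the interval values and AU labels are illustrative assumptions, not data or code from the project.

```python
# A minimal sketch of Allen's interval algebra over temporal segments, such as
# the activation intervals of facial action units (AUs) or gesture phases.
# Interval values and AU labels below are illustrative, not from the project.

INVERSE = {"before": "after", "meets": "met-by", "overlaps": "overlapped-by",
           "starts": "started-by", "during": "contains", "finishes": "finished-by"}

def allen_relation(a, b):
    """Return the Allen relation of interval a = (start, end) to b, start < end."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1:                return "before"
    if a2 == b1:               return "meets"
    if a1 < b1 < a2 < b2:      return "overlaps"
    if a1 == b1 and a2 < b2:   return "starts"
    if b1 < a1 and a2 < b2:    return "during"
    if b1 < a1 and a2 == b2:   return "finishes"
    if a1 == b1 and a2 == b2:  return "equals"
    return INVERSE[allen_relation(b, a)]  # the remaining six cases are inverses

# Example: a brow raise that begins before and overlaps a smile.
brow_raise = (0.2, 1.0)  # seconds, hypothetical AU1+AU2 activation
smile      = (0.6, 1.8)  # seconds, hypothetical AU12 activation
print(allen_relation(brow_raise, smile))  # -> "overlaps"
print(allen_relation(smile, brow_raise))  # -> "overlapped-by"
```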
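As a sketch of how the decision-theoretic framework could choose robot feedback, the following assumes it reduces to expected-utility maximization over a belief about the user's emotional state; the states, actions, and utility values are hypothetical placeholders, not the project's model.

```python
# A minimal sketch of decision-theoretic feedback selection: pick the robot
# action with the highest expected utility under a belief over user emotions.
# All states, actions, and numbers are illustrative assumptions.

# Belief over user emotion, e.g. fused from facial, gestural, and context cues.
belief = {"happy": 0.1, "neutral": 0.3, "sad": 0.6}

# Hypothetical utility of each robot response in each user state.
utility = {
    "encourage":  {"happy": 0.8, "neutral": 0.5, "sad": 0.2},
    "comfort":    {"happy": 0.3, "neutral": 0.4, "sad": 0.9},
    "stay_quiet": {"happy": 0.5, "neutral": 0.6, "sad": 0.1},
}

def best_feedback(belief, utility):
    """Return the action maximizing expected utility, plus all expected utilities."""
    eu = {a: sum(belief[s] * utility[a][s] for s in belief) for a in utility}
    return max(eu, key=eu.get), eu

action, eu = best_feedback(belief, utility)
print(action, eu)  # -> "comfort" is optimal under this (mostly sad) belief
```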
This project proposed research on bidirectional emotive interaction for socially assistive robots based on multiple visual signals. The research has important theoretical and practical significance for multimodal human-robot-environment perception and natural interaction: using the spatio-temporal characteristics of interacting facial muscles during expression changes as prior knowledge, we studied facial action unit recognition and improved the robustness of expression recognition to the imaging environment, promoting the robot's coexistence with that environment; using Allen's interval algebra to describe the rich dynamics of changing expressions, we proposed dynamic facial behavior perception and understanding based on Allen's interval algebra, providing a new approach to the robot's multimodal dynamic perception of users; we studied emotional assistance methods, promoting the robot's coexistence with people; and by generating natural expressions according to the probabilistic dependencies among expressions, facial action units, and feature points, we provided a new idea for improving the naturalness of synthesized expressions. The project published 28 papers in high-level domestic and international journals and conferences, including 14 SCI journal papers, 3 CCF-A journal papers, and 10 CCF-A conference papers; it won runner-up three times in international competitions in affective computing, filed 3 patent applications, and was granted 1 software copyright; the team co-organized 2 related international conferences. Graduate students participating in the project received the Chinese Academy of Sciences President's Excellence Award (1 recipient), National Scholarships (12 awards), and Outstanding Graduate honors (4 recipients); in total, 9 master's students were trained.
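The expression synthesis idea can be illustrated by a small two-stage sampler: an expression label probabilistically activates action units, and active units displace facial feature points that would drive the robot's face. All probabilities and displacement values below are invented for illustration; the project's actual learned dependencies are not shown.

```python
# A minimal sketch of expression -> AU -> feature-point synthesis under the
# probabilistic-dependency idea described above. Numbers are made up.
import random

# P(AU active | expression): hypothetical conditional probabilities.
p_au_given_expr = {
    "happiness": {"AU6": 0.9, "AU12": 0.95, "AU4": 0.05},
    "sadness":   {"AU1": 0.8, "AU4": 0.7,  "AU15": 0.85},
}

# Per-AU vertical displacement (pixels) of a few facial landmarks.
au_to_landmarks = {
    "AU6":  {"cheek": -2.0},
    "AU12": {"mouth_corner": -3.0},
    "AU1":  {"inner_brow": 2.5},
    "AU4":  {"inner_brow": -2.0},
    "AU15": {"mouth_corner": 2.0},
}

def synthesize(expression, rng=random.Random(0)):
    """Sample active AUs for an expression, then accumulate landmark offsets."""
    offsets = {}
    for au, p in p_au_given_expr[expression].items():
        if rng.random() < p:  # sample whether this AU activates
            for lm, d in au_to_landmarks[au].items():
                offsets[lm] = offsets.get(lm, 0.0) + d
    return offsets  # offsets would drive the robot's facial actuators

print(synthesize("happiness"))  # e.g. {'cheek': -2.0, 'mouth_corner': -3.0}
```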
Spatiotemporal evolution characteristics and driving factors of water resource utilization in the Yellow River basin
Research on direct brain control of robot direction and speed based on SSVEP
Theme park evaluation based on public sentiment orientation: a case study of Volga Manor in Harbin
Geographic identification and classification of multidimensional deprivation in residential environments: a case study of the main urban area of Zhengzhou
Named entity recognition based on fine-grained word representations
Sentiment analysis techniques for social networks based on user interaction characteristics
Research and experimental validation of a humanoid emotional interaction robot with personality
Cognitive-affective computing and effectiveness study of assistive therapy robots for autism
Emotion recognition and emotion modeling for emotional robots based on audio-visual information fusion