2025-05-21

Special Issue on Embodied Multi-Modal Data Fusion for Robot Continuous Perception

Submission Date: 2025-10-20

Embodied multi-modal data fusion represents a cutting-edge frontier in robotics, with the potential to revolutionize how robots perceive, understand, and interact with the world. By integrating diverse sensory modalities, it enables robots to operate autonomously and adaptively in dynamic, unstructured environments. As robots become increasingly integral to sectors such as healthcare, manufacturing, transportation, and services, the demand for robust, efficient, and intelligent perception systems is more critical than ever. Embodied multi-modal data fusion addresses these demands by leveraging state-of-the-art technologies—including sensor fusion, machine learning, and embodied cognition—to process complex sensory inputs, make real-time decisions, and adapt continuously to changing environments.

This special issue on Embodied Multi-Modal Data Fusion for Robot Continuous Perception serves as a foundational resource, highlighting the field’s interdisciplinary nature and transformative potential. Covering topics such as multi-modal fusion algorithms, embodied cognition, and practical applications, it provides a comprehensive platform for researchers, engineers, and industry professionals to foster innovation and collaboration across disciplines.


Topics of interest:


We welcome submissions that present innovative theories, methodologies, and applications in embodied multi-modal data fusion for continuous robot perception.


Multi-Modal Data Fusion:


Novel approaches for integrating diverse sensory modalities, including vision, radar, audio, tactile, and proprioception;

Strategies for managing noisy, incomplete, or misaligned data in multi-modal fusion;

Cross-modal learning and representation techniques to improve robot perception accuracy and robustness.


Embodied Perception:


Robot perception systems that tightly integrate sensory inputs with robot kinematics, dynamics, and physical embodiment;

Context-aware perception frameworks enabling adaptive and task-specific robot behaviors;

Perception-action loops for real-time decision-making and interaction in dynamic environments.


Continuous Perception:


Real-time processing of multi-modal sensory data streams to ensure continuous and uninterrupted robot perception;

Temporal modeling techniques for dynamic environments, including spatiotemporal data fusion and sequential learning;

Energy-efficient and resource-constrained algorithms for continuous robot perception on edge or embedded systems.


Learning and Adaptation:


Self-supervised, unsupervised, and few-shot learning approaches for multi-modal robot perception;

Techniques for lifelong learning and adaptation in robots operating in evolving environments;

Transfer learning and domain adaptation methods for cross-environment robot perception.


Guest editors:


Rui Fan, PhD

Tongji University, Shanghai, China

Email: rui.fan@ieee.org


Xuebo Zhang, PhD

Nankai University, Tianjin, China

Email: zhangxuebo@nankai.edu.cn


Hesheng Wang, PhD

Shanghai Jiao Tong University, Shanghai, China

Email: wanghesheng@sjtu.edu.cn


George K. Giakos, PhD

Manhattan University, Riverdale, New York, United States

Email: george.giakos@manhattan.edu


Manuscript submission information:


PRL's submission system (Editorial Manager®) will be open for submissions to our Special Issue from October 1st, 2025. When submitting your manuscript, please select the article type VSI: EMDF-RCP. Both the Guide for Authors and the submission portal can be found on the Journal Homepage: Guide for authors - Pattern Recognition Letters - ISSN 0167-8655 | ScienceDirect.com by Elsevier.


Important dates


Submission Portal Open: October 1st, 2025

Submission Deadline: October 20th, 2025

Acceptance Deadline: April 1st, 2026


Keywords:


Robot Learning; Robot Perception; Multi-Modal Perception; Continuous Learning; Embodied AI; Data Fusion
