Noh S, Lee W, Myung H. Sample-efficient and occlusion-robust reinforcement learning for robotic manipulation via multimodal fusion dualization and representation normalization. Neural Netw 2025;185:107202. PMID: 39908913. DOI: 10.1016/j.neunet.2025.107202.
Received: 08/14/2024; Revised: 12/30/2024; Accepted: 01/21/2025.
Abstract
Recent advances in visual reinforcement learning (visual RL), which learns from high-dimensional image observations, have narrowed the gap between state-based and image-based training. However, visual RL continues to face significant challenges in robotic manipulation tasks involving occlusions, such as lifting obscured objects. Although high-resolution tactile sensors have shown promise in addressing these occlusion issues through visuotactile manipulation, their high cost and complexity limit widespread adoption. In this paper, we propose a novel RL approach that introduces multimodal fusion dualization and representation normalization to enhance sample efficiency and robustness in robotic manipulation tasks involving occlusions, without relying on tactile feedback. Our multimodal fusion dualization technique separates the fusion process into two distinct modules, each optimized separately for the actor and the critic, yielding representations tailored to each network. Additionally, representation normalization techniques, including LayerNorm and SimplexNorm, are incorporated into the representation learning process to stabilize training and prevent issues such as gradient explosion. We demonstrate that our method not only effectively tackles challenging robotic manipulation tasks involving occlusions but also outperforms state-of-the-art visual RL and state-based RL methods in both sample efficiency and task performance. Notably, this is achieved without relying on tactile sensors or prior knowledge, such as predefined low-dimensional coordinate states or pre-trained representations, making our approach both cost-effective and scalable for real-world robotic applications.
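To make the two named ideas concrete, below is a minimal PyTorch sketch, not the authors' implementation. It assumes "dualization" means two independently parameterized fusion encoders, one feeding the actor and one the critic, and it assumes SimplexNorm splits the latent vector into groups and applies a per-group softmax (a simplicial projection); the abstract does not define SimplexNorm, so that reading, along with the class names, feature dimensions, and `group_size`, is hypothetical.

```python
# Hedged sketch of multimodal fusion dualization + representation normalization.
# Assumptions (not specified in the abstract): SimplexNorm = per-group softmax;
# fusion = concatenate image features with proprioceptive state, then an MLP.
import torch
import torch.nn as nn


class SimplexNorm(nn.Module):
    """Split the latent vector into fixed-size groups and softmax each group,
    so every group lies on a probability simplex (bounded activations)."""

    def __init__(self, group_size: int = 8):
        super().__init__()
        self.group_size = group_size

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        b, d = z.shape  # d must be divisible by group_size
        z = z.view(b, d // self.group_size, self.group_size)
        return torch.softmax(z, dim=-1).view(b, d)


class FusionEncoder(nn.Module):
    """Fuse image features with proprioceptive state, then normalize."""

    def __init__(self, img_dim: int, prop_dim: int, latent_dim: int = 64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(img_dim + prop_dim, latent_dim),
            nn.LayerNorm(latent_dim),  # stabilizes representation learning
            nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        self.norm = SimplexNorm()

    def forward(self, img_feat: torch.Tensor, prop: torch.Tensor) -> torch.Tensor:
        return self.norm(self.fuse(torch.cat([img_feat, prop], dim=-1)))


# "Dualization": two separately optimized fusion modules, one per network,
# instead of a single shared encoder for both actor and critic.
actor_encoder = FusionEncoder(img_dim=128, prop_dim=16)
critic_encoder = FusionEncoder(img_dim=128, prop_dim=16)

img_feat = torch.randn(4, 128)  # e.g., CNN features of the camera image
prop = torch.randn(4, 16)       # e.g., joint angles and gripper state
z_actor = actor_encoder(img_feat, prop)    # fed to the policy head
z_critic = critic_encoder(img_feat, prop)  # fed to the Q-value head
```

In this reading, each encoder receives gradients only from its own objective (policy loss for the actor, TD loss for the critic), which is one plausible way the tailored per-network representations described above could arise.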