深度强化学习的机械臂密集场景多物体抓取方法

Translated title of the contribution: Deep Reinforcement Learning for Manipulator Multi-Object Grasping in Dense Scenes

Xin Li, Jie Shen, Kai Cao, Tao Li

Research output: Contribution to journal › Article › peer-review

Abstract

Robots grasping objects in cluttered scenes are prone to collisions and must rely on pushing actions to create space for grasping. Existing push-grasp collaborative methods suffer from low sample efficiency and low grasping success rates. To address these problems, a new deep reinforcement learning method based on DDQN (double deep Q-network) is proposed to efficiently learn effective push-grasp cooperative strategies. The system incorporates a mask function that screens effective actions, allowing the robot to focus on samples that promote efficient learning. In addition, the push reward function is defined as the difference between the average relative distance of all objects in the workspace before and after a push, which gives a more precise assessment of how a candidate push affects scene density. Experimental comparisons with VPG (visual pushing for grasping) show that the proposed method accelerates training while improving the grasping success rate, and real-world experiments verify that the system transfers fully to a real robot.
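The abstract describes the push reward and the action mask only at a high level: the reward is the change in the average relative distance between all objects before and after a push, and the mask suppresses ineffective actions before selection. The Python sketch below illustrates both ideas under stated assumptions; the function names, the centroid inputs (e.g. object centers extracted from a segmented depth image), and the pixel-wise Q-map layout are hypothetical choices for illustration, not the paper's implementation.

```python
import numpy as np

def mean_pairwise_distance(centroids: np.ndarray) -> float:
    """Average relative distance over all pairs of object centroids (shape N x 2 or N x 3)."""
    n = len(centroids)
    if n < 2:
        return 0.0
    diffs = centroids[:, None, :] - centroids[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Mean over the upper triangle, i.e. each unordered pair counted once.
    return dists[np.triu_indices(n, k=1)].mean()

def push_reward(centroids_before: np.ndarray, centroids_after: np.ndarray) -> float:
    """Push reward as the abstract describes it: the difference of the average
    relative distance before and after the push. A positive value means the
    push loosened the clutter (hypothetical sign convention)."""
    return mean_pairwise_distance(centroids_after) - mean_pairwise_distance(centroids_before)

def masked_greedy_action(q_map: np.ndarray, valid_mask: np.ndarray) -> tuple:
    """Greedy action selection with a mask that screens effective actions:
    Q-values at masked-out locations are suppressed before the argmax."""
    masked_q = np.where(valid_mask, q_map, -np.inf)
    return np.unravel_index(np.argmax(masked_q), masked_q.shape)
```

For example, a push that spreads three tightly packed objects apart yields a positive `push_reward`, while a push that compacts the pile yields a negative one; how the paper scales or thresholds this signal is not specified in the abstract.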

Original language: Chinese (Simplified)
Pages (from-to): 325-332
Number of pages: 8
Journal: Computer Engineering and Applications
Volume: 60
Issue number: 23
State: Published - 1 Dec 2024

